Children have a right to a non-violent upbringing. In Germany, physical and mental injuries, neglect and abuse are legally considered threats to the well-being of children. Such endangerment can result from the behaviour, or the negligence, of a parent or guardian. According to Destatis, 197,800 risk assessments were initiated in 2021. Most reports came from the police and judicial authorities; many others came from relatives, neighbours or anonymous tips. Staff in German youth welfare offices examined these reports: in 30 per cent of the reported cases, the suspicion of child welfare endangerment was confirmed.
“Professionals in youth welfare offices are under pressure to weigh parental rights against the best interests of the child. They have to make ethical and legal decisions in morally and emotionally charged situations, within a short time and with limited resources,” explains Professor Michael Reder from the Munich School of Philosophy. He heads the bidt-funded project “Can an algorithm calculate morally in a conflict?”, or KAIMo for short.
Child welfare endangerment: a difficult decision
In the decision-making process about a possible risk to a child’s well-being, the professional competence and experience of child and youth welfare workers are essential. The question is: will a child be temporarily taken into care? This is a drastic intervention in the family, one that serves to protect the individual child. To support this work, clear guidelines and checklists are indispensable, as is the exchange among social workers. The youth welfare office’s decision to take a child into care must ultimately be confirmed by the courts.
But how can artificial intelligence (AI) support the demanding work of youth welfare offices? Project coordinator Dr Christopher Koska from the Munich School of Philosophy explains: “Professionals in social work know the history of an individual case, the background, the network of actors. An AI-based assistance system can both facilitate the processing of explicit information from the case files and query the professionals’ implicit knowledge.” With every new piece of information, the situation in a child’s individual case changes. That is why the interdisciplinary bidt project team is focusing on a digital procedure for locating blind spots: are there aspects of a child’s case on which the file contains no information at all?
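A minimal sketch of what such a blind-spot check could look like, assuming a simple keyword approach; the checklist aspects and cue words below are invented placeholders, not the project’s actual criteria:

```python
# Illustrative sketch only: flag checklist aspects on which a case file
# contains no information yet. Aspects and keywords are hypothetical.

CHECKLIST = {
    "housing situation": ["apartment", "housing", "living conditions"],
    "school attendance": ["school", "teacher", "absence"],
    "medical care": ["doctor", "paediatrician", "examination"],
    "family network": ["grandparents", "relatives", "neighbours"],
}

def find_blind_spots(case_file_text: str) -> list[str]:
    """Return the checklist aspects the case file never mentions."""
    text = case_file_text.lower()
    return [
        aspect
        for aspect, keywords in CHECKLIST.items()
        if not any(keyword in text for keyword in keywords)
    ]

note = "The boy has repeatedly missed school; his teacher contacted us."
print(find_blind_spots(note))
# -> ['housing situation', 'medical care', 'family network']
```

A real system would need far more robust language processing than substring matching; the point is only that “no information on aspect X” is itself a signal worth surfacing to the professional.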
Acceptance of AI in social work
Youth welfare offices are basically open to the use of supporting software, but view AI technologies critically. Christopher Koska reports: “We evaluated existing tools in the first project phase. Many are too narrowly conceptualised and tend very much towards taking decisions away from professionals.” Michael Reder adds: “There are systems on the market where you enter a few characteristics and they recommend: take the child out of the family. AI technologies in such sensitive fields should not be developed and used naively.” The project team therefore explicitly strives for a reflective AI system that systematically collects, evaluates and processes information and thus supports decision-making processes in youth welfare offices.
An assistance system must also fit sensibly into the workflows of the youth welfare office. “We ask: what increases acceptance in everyday work?” says Michael Reder. “That’s why we looked at systematic visualisation, for example.” This point came back from the professionals in the youth welfare offices themselves: visual highlighting, for instance of text sections in case files, helps with information processing and supports collegial cooperation. “Social workers also said it would be helpful if the machine could talk to them, that is, convey information auditorily,” reports Christopher Koska.
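As an illustration of the visual-highlighting idea (not the project’s actual implementation), a few lines of Python suffice to mark cue words in a case note for display; the cue list is an invented placeholder:

```python
# Illustrative sketch only: emphasise cue words in a case note so that
# relevant passages stand out during review. Cue words are hypothetical.
import re

CUE_WORDS = ["bruise", "neglect", "alcohol", "unsupervised"]

def highlight(text: str, marker: str = "**") -> str:
    """Wrap each cue-word occurrence in markers for visual emphasis."""
    pattern = re.compile(
        r"\b(" + "|".join(map(re.escape, CUE_WORDS)) + r")\b",
        re.IGNORECASE,
    )
    return pattern.sub(lambda m: marker + m.group(0) + marker, text)

print(highlight("A bruise was noted; the child was often unsupervised."))
# -> A **bruise** was noted; the child was often **unsupervised**.
```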
The more anthropomorphic the AI systems, the better the acceptance in everyday work.
Dr. Christopher Koska
Structure of the bidt project
Conceptualisation: What happens in the youth welfare offices?
“In the first phase of the project, we looked at the processes in selected youth welfare offices,” says Michael Reder. “We wanted to understand which instruments social work uses. It was also important for us to understand the fundamental tension in child and youth welfare: the image of the family as a system to be protected, and the understanding of the state as an ‘intervener’ in this system.”
Programming: developing an assistance system
The researchers are currently programming an AI-based assistance system grounded in the everyday work of professionals in youth welfare offices. Soon, a tool will be available that can be used to visualise information more clearly. Professor Nicholas Müller and Maximilian Kraus from the Chair of Socioinformatics and Social Aspects of Digitalisation at the University of Applied Sciences Würzburg-Schweinfurt are also involved in the bidt project. Together, the research team is asking: how can we best train the AI?
Implementation: AI system in use
In the third and final project step, the researchers will implement their assistance system in youth welfare offices and evaluate its use.
Digitisation of social work – a special field
What is special about social work is that artificial intelligence has so far rarely been used in such a morally, legally and emotionally conflict-laden field. “We ask whether it makes sense at all to use AI technologies under these conditions,” says Michael Reder. “Of course, fear is always present in the youth welfare offices: will the machine end up dictating our decisions? Is it supposed to make predictions?” The research team of the bidt project, however, is by no means concerned with “a machine taking over the youth welfare offices one day”.
Our goal is to explore the opportunities and challenges of AI-based decision-making systems in highly conflict-laden situations, and to better understand which software applications we can sensibly want.
Prof. Dr. Michael Reder
Professor Robert Lehmann and Jennifer Burghardt from the Institute for E-Counselling at the Nuremberg University of Applied Sciences contribute the perspective of social work. The interdisciplinary project team wants to find out: At what point in the decision-making and work process is the use of AI technology useful in youth welfare offices?
Protecting sensitive data, using synthetic data
The research team faces several challenges. Data protection is important both in AI research and in the programming of AI systems, especially when it comes to protecting the personal data of families and children. Yet in order to train an AI, developers depend on sufficiently large data sets. “In our cooperation with youth welfare offices, we discovered that data collection is enormously complex,” reports Christopher Koska. “That’s why we asked youth welfare offices nationwide to support us in creating synthetic data.”
Synthetic data are artificially created case files. To generate them, the researchers sent short descriptions of fictitious cases of child welfare endangerment to youth welfare offices. “We wanted to know: what does a social worker write at this point? How do they formulate something?” says Christopher Koska. “We use this synthetic data in combination with real, anonymised case files to train the AI.” The comparison of the two data sources is particularly interesting, he says: is it possible to generate normativity via synthetic data and specifically counteract bias?
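What training on a combination of synthetic and real, anonymised case files could look like in the simplest case is sketched below; the example notes, labels and the classifier choice are assumptions for illustration, and the project itself emphasises assistance rather than automated classification:

```python
# Illustrative sketch only: fit a simple text model on a mix of synthetic
# and anonymised real case notes. All texts and labels are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

synthetic = [  # written from fictitious case descriptions
    ("Child appears well cared for; regular school attendance.", 0),
    ("Repeated unexplained bruises; parents refuse any contact.", 1),
]
real = [  # anonymised real case notes (placeholders here)
    ("Neighbours report the child is often alone late at night.", 1),
    ("Family accepted counselling; the situation has stabilised.", 0),
]

texts, labels = zip(*(synthetic + real))
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

print(model.predict(["Child often alone; unexplained bruises noted."]))
```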
Algorithmic bias: how to counteract it
Synthetic data, such as the case files artificially created in the bidt project, could curb algorithmic bias. Algorithmic bias arises, for example, from skewed or erroneous data that leads to discrimination against certain groups of people. When creating synthetic data, deliberate attention can be paid to excluding such bias.
“We have to ask in this context: are there biases among social workers? This is important because the case files we use to train the AI have already been written from a certain perspective. The professional may already have formulated an implicit decision,” says Christopher Koska. Social workers do, of course, learn to be aware of implicit prejudice in their studies and training. But if prejudices still come into play, a technical system could help make them visible.
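A technical system can make such patterns visible with very simple means. The sketch below, a hypothetical audit and not the project’s method, compares how often a model flags cases across two groups; large gaps would be a signal to review the training data:

```python
# Illustrative sketch only: compare a model's flag rate across groups to
# surface potential bias. The predictor and group labels are invented.
from collections import defaultdict

def flag_rates(cases, predict):
    """Share of flagged cases per group: group -> flagged / total."""
    counts = defaultdict(lambda: [0, 0])
    for case in cases:
        counts[case["group"]][0] += predict(case["text"])
        counts[case["group"]][1] += 1
    return {g: flagged / total for g, (flagged, total) in counts.items()}

cases = [
    {"group": "district A", "text": "note with concern"},
    {"group": "district A", "text": "routine note"},
    {"group": "district B", "text": "routine note"},
    {"group": "district B", "text": "routine note"},
]
predict = lambda text: int("concern" in text)  # stand-in for a trained model
print(flag_rates(cases, predict))
# -> {'district A': 0.5, 'district B': 0.0}
```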
Digitisation of child and youth welfare: opportunities and limits
As an interim conclusion of the bidt project, Michael Reder states: “The conceptualisation of AI systems must be tied back much more closely to practice in conflict-laden fields and developed with the people who use them: in our case, the social workers, and really the families as well.” Especially in highly conflict-laden fields such as child and youth welfare, it is also important not to “rush technical developments onto the market”. AI technology should be understood as an assistance system, not as a decision-making system. “The interdisciplinary perspective is valuable for this,” Michael Reder summarises. “Because if we look at such applications purely in economic terms, we fail to ask important questions. For example: how do we want to shape the digital world of tomorrow? Also in the microcosm of the youth welfare office.”