New AI tool boosts accuracy of child abuse detection in hospitals


Artificial intelligence is changing how doctors detect child abuse in hospital emergency rooms. New research shows that AI tools can identify physical harm to a child more accurately than standard diagnostic codes. These tools look beyond basic hospital records and help doctors get a clearer picture of what's really happening.

The study focused on physical abuse in children under age 10, especially those younger than 2. Doctors often miss signs of abuse because they rely too heavily on codes entered in medical records, and these codes often don't tell the full story. In fact, the study found that depending only on abuse-specific codes produced abuse-rate estimates that were off by an average of 8.5%.

Farah Brink, MD, a child abuse pediatrician, explains how this new method could help doctors respond faster and more accurately. “Our AI approach offers a clearer look at trends in child abuse, which helps providers more appropriately treat abuse and improve child safety,” she said. Brink also teaches at Ohio State University and works at a children’s hospital known for treating young patients in crisis.

Relying on abuse codes alone produced estimates that were off by an average of 8.5%. (CREDIT: Pexels)

What the AI Model Does Differently

Doctors and hospitals often use a system called ICD-10-CM to classify injuries and illnesses. But this system doesn’t always catch signs of physical abuse, especially in busy emergency departments. Many injury codes are general. They don’t always show whether the injury happened by accident or on purpose.
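For readers curious what an abuse-specific code check looks like in practice, here is a minimal Python sketch. The prefixes T74.12 (confirmed child physical abuse) and T76.12 (suspected) are real ICD-10-CM categories, but this short list is an illustrative assumption, not the study's actual definition.

```python
# Illustrative sketch: does a visit carry an abuse-specific ICD-10-CM code?
# The prefix list below is a simplified assumption, not the study's definition.

# T74.12* = child physical abuse, confirmed; T76.12* = suspected
ABUSE_CODE_PREFIXES = ("T74.12", "T76.12")

def has_abuse_specific_code(visit_codes: list[str]) -> bool:
    """Return True if any diagnosis code on the visit is abuse-specific."""
    return any(code.startswith(ABUSE_CODE_PREFIXES) for code in visit_codes)

# A general fracture code alone says nothing about intent:
print(has_abuse_specific_code(["S52.501A"]))               # False
print(has_abuse_specific_code(["S52.501A", "T74.12XA"]))   # True
```

As the example shows, a visit coded only with a generic injury code passes through the check unflagged, which is exactly the gap the study describes.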

To solve this problem, researchers trained a machine learning model to analyze a wider range of data. They looked at more than 3,300 emergency room visits across seven children’s hospitals between February 2021 and December 2022. The study included only children younger than 10 who had been evaluated for abuse by a child abuse pediatrician. Nearly 75% of those children were younger than 2, and over half were younger than 1.

Instead of looking only at whether a case was labeled as abuse in the records, the AI model also examined injury codes. It used a type of analysis called LASSO logistic regression, which automatically narrows a long list of codes down to the ones that actually predict abuse. This helped predict whether an injury was caused by abuse, even when the hospital did not label it that way.
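As a rough illustration of the technique, the sketch below fits an L1-penalized (LASSO) logistic regression with scikit-learn on made-up data. The features, labels, and penalty strength are all assumptions; only the model family comes from the study.

```python
# A minimal sketch of an L1-penalized (LASSO) logistic regression, the kind
# of model described above. The data are random stand-ins for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(500, 40))  # 500 visits x 40 injury-code indicators
y = rng.integers(0, 2, size=500)        # 1 = expert-determined abuse (toy labels)

# penalty="l1" is what makes this LASSO: codes that don't help predict
# abuse are pushed to exactly zero weight, leaving a sparse model.
model = LogisticRegression(penalty="l1", solver="liblinear", C=0.1)
model.fit(X, y)

kept = np.flatnonzero(model.coef_[0])
print(f"{kept.size} of {X.shape[1]} codes kept as predictors")
print("P(abuse) for the first visit:", model.predict_proba(X[:1])[0, 1])
```

The L1 penalty is what gives LASSO its feature-selection behavior: uninformative codes end up with a weight of exactly zero, so the model leans only on the codes that carry real signal.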

Better Accuracy, Fewer Mistakes

The study’s findings highlight how much more accurate AI can be. In 43% of all hospital visits analyzed, doctors had added at least one abuse-specific code. But when researchers compared the abuse codes to what child abuse experts had concluded, they found a big gap.

In cases where abuse-specific codes were used, the actual rate of confirmed abuse was 63.4%. But abuse was also confirmed in 12.7% of the cases that carried no abuse-specific code. This shows that codes alone can miss real abuse.

Across all seven hospitals, estimates based only on abuse codes were off by as much as 14.3%. On average, they missed the true rate by 8.5%. When the AI model was used instead, the errors dropped sharply: the new method had an average error of just 1.8%.
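To make those error figures concrete, here is a quick back-of-the-envelope version of the calculation: the average absolute gap between each hospital's estimated and true abuse rate. The per-hospital numbers below are hypothetical; only the form of the computation is taken from the study.

```python
# Hypothetical per-hospital abuse rates, for illustration only.
true_rates     = [0.30, 0.35, 0.28, 0.40, 0.33, 0.25, 0.38]
code_estimates = [0.41, 0.30, 0.36, 0.31, 0.42, 0.33, 0.30]

# Absolute gap between each estimate and the expert-derived "true" rate.
errors = [abs(est - true) for est, true in zip(code_estimates, true_rates)]
print(f"worst hospital error: {max(errors):.1%}")
print(f"average error: {sum(errors) / len(errors):.1%}")
```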

ROC curves for predictive models. (CREDIT: Farah Brink, et al.)

In six of the seven hospitals, the AI model gave more accurate results than abuse codes alone. At the seventh hospital, the error rate increased slightly, but only by 0.6%. This suggests the model holds up across different locations and patient groups.

Why This Matters for Children

Quick and accurate identification of abuse is essential. The earlier doctors can spot patterns of physical harm, the sooner they can step in and protect children. Right now, emergency rooms are fast-paced places where doctors have little time to investigate each injury deeply. That’s why better tools are needed to support their decisions.

By giving doctors stronger data, AI can help them recognize abuse even in children who don’t come in with obvious signs. This is especially important for babies and toddlers who can’t talk or explain what happened to them.

“AI-powered tools hold tremendous potential to revolutionize how researchers understand and work with data on sensitive issues, including child abuse,” said Dr. Brink.

Building a Smarter Safety Net

The researchers used records from a trusted network of child abuse experts, called CAPNET. Cases rated 5 to 7 on a 7-point abuse likelihood scale were counted as abuse. These ratings came from trained specialists who carefully studied each child's injury, behavior, and medical history, and their decisions were then used as the "truth" to compare against the AI model and the hospital codes.
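For illustration, that labeling step might look like the sketch below, which simply maps the 7-point expert rating to a binary "truth" label using the 5-to-7 cutoff described above.

```python
# A minimal sketch of the ground-truth step: expert likelihood ratings
# of 5-7 on the 7-point scale count as abuse.
def expert_label(likelihood_rating: int) -> int:
    """Map a 1-7 abuse-likelihood rating to a binary label (1 = abuse)."""
    if not 1 <= likelihood_rating <= 7:
        raise ValueError("rating must be between 1 and 7")
    return 1 if likelihood_rating >= 5 else 0

print([expert_label(r) for r in (2, 4, 5, 7)])  # [0, 0, 1, 1]
```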

By linking emergency room visits with expert evaluations, the researchers could test how well each method performed. They found that when hospitals relied only on administrative codes, they often overestimated abuse in some places and underestimated it in others. The AI model gave results much closer to the expert opinion.

Abuse prevalence estimates vs. true values. (CREDIT: Farah Brink, et al.)

This matters because decisions based on flawed data can have real consequences. Children may be returned to unsafe homes, or families may be unfairly accused. Tools like this AI model can reduce those risks by offering a clearer, evidence-based view.

With better estimates of how often abuse occurs, hospitals and public health officials can also plan better. They can see where help is needed most and design programs that prevent harm before it happens.

Looking Forward

This study is just the beginning. The researchers plan to keep improving their model with more data from other hospitals. They also hope to use similar tools to spot other forms of child maltreatment, such as neglect or emotional abuse.

For now, the findings show that smart technology can play a powerful role in child safety. It can support doctors in making better decisions, even in the most stressful situations. As more hospitals start using these tools, children will have a better chance of getting the care and protection they need.


