Pittsburgh's predictive analytics algorithm slaps a "scarlet number" risk score on every child who is the subject of a report alleging neglect. And they're trying to do it to every child at birth.
Yesterday’s post to this blog noted that in Australia, an algorithm that wreaked havoc on the lives of poor people receiving public assistance was ruled illegal after it failed 20 percent of the time. But apparently, in America, if you just whisper the words “child abuse” in everybody’s ears, even a failure rate of up to 99.8 percent is OK.
Here’s the thing about those predictive analytics algorithms that supposedly can predict who is going to abuse a child: They tend to fail spectacularly.
How AFST works
When a report alleging neglect comes in, AFST consults a vast treasure trove of data gathered disproportionately on poor families without their informed consent. It then coughs up a risk score between 1 and 20: a "scarlet number" that can wind up haunting a child for life. And they're trying to do it to every child at birth.
Twenty is a risk level so high it literally flashes red on the screens of those responsible for deciding whether to send out an investigator, and those humans are strongly discouraged from overriding the algorithm.
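To make the mechanics concrete, here is a minimal sketch of a screening step like the one described above. Everything in it is an assumption for illustration: the function name, the cutoff, and the override flag are hypothetical, not Allegheny County's actual code.

```python
# Hypothetical sketch of an AFST-style screening decision, based only on
# the behavior described in this post: scores run from 1 to 20, a 20
# "flashes red," and screeners are discouraged from overriding it.
# None of these names or rules come from the county's real system.

HIGHEST_RISK = 20

def screen_referral(risk_score: int, human_override: bool = False) -> str:
    """Return the disposition for one report, given its risk score."""
    if not 1 <= risk_score <= 20:
        raise ValueError("scores are expected to run from 1 to 20")
    if risk_score == HIGHEST_RISK and not human_override:
        # The default path for a top score is to send out an investigator.
        return "investigate"
    return "screener judgment"

print(screen_referral(20))  # investigate
print(screen_referral(7))   # screener judgment
```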
Now a new study puts a number on the failure: of the children AFST rated at the very highest risk, as few as 0.2 percent turned up at the Children's Hospital ER with an abuse-related injury. In other words, the odds of impurity in a bar of Ivory Soap (famously "99 and 44/100 percent pure") are greater than the odds of a child labeled extremely high risk by AFST turning up there with such an injury.
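The back-of-the-envelope comparison, using only the figures already in this post and Ivory's old slogan:

```latex
\[
\underbrace{100\% - 99.44\%}_{\text{Ivory Soap impurity}} = 0.56\%
\;>\;
\underbrace{100\% - 99.8\%}_{\text{AFST highest-risk ``hits''}} = 0.2\%
\]
```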
From the statement touting the results:

New research co-authored by Rhema Vaithianathan [the co-designer of AFST] and Diana Benavides-Prado confirms that children identified as at risk by the Allegheny Family Screening Tool … are also at considerably heightened risk of injury, abuse and self-harm hospitalisation.
So what about the other 99.8 percent? Unless a human overrides the algorithm, the worker must go to the door, demand entry, and search the entire home, poking into cabinets and cupboards and refrigerators (which, at the moment, also means increasing the risk of spreading or contracting COVID-19). They must interrogate every member of the family, often an enormously traumatic experience for a child. And they may well strip-search the children.
Roughly 99.8 percent of the time, it will be for nothing. In addition to inflicting all that needless trauma, workers will have wasted time that could have been used to find the very few children in real danger.
No doubt AFST proponents will argue that the 99.8 percent figure applies only to the most serious injuries, those that involved “hospitalization.” The statement touting the results repeatedly uses that term, with no further explanation.
It could also be argued that some injured children might show up at other ERs. But most of them, especially those whose injury is serious or where there's a suspicion of abuse, are likely to wind up at the hospital where the study was done. That's because, according to the authors, "UPMC Children's Hospital is the sole provider of secondary [meaning specialized] care for children in the Allegheny County area."
But even if you give AFST so much benefit of the doubt as to assume it's really five times more accurate than this study revealed, that still would mean AFST is wrong 99 percent of the time.
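The arithmetic behind that concession, again using the post's own figures:

```latex
\[
5 \times 0.2\% = 1\% \ \text{right}
\quad\Longrightarrow\quad
100\% - 1\% = 99\% \ \text{wrong}
\]
```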
No amount of tweaking the algorithm is going to improve this track record, and, in one sense, that's good news. What the study really shows, once again, is that “child abuse” of the sort that comes to mind when we hear those words is actually extremely rare. Think of it: of those children AFST rated at the very highest risk, up to 99.8 percent did not experience a child abuse injury that required either hospitalization or an ER visit.
Consider, too, this caution about algorithms and bias:

Much harm can be done under the umbrella of good intentions, because big data is a big weapon. … the concerns about the accuracy of the algorithm deployed should be of paramount importance, since the threat of historic biases in large data sets has become increasingly apparent. … The child abuse literature reports that both the evaluation of suspected abuse and subsequent diagnoses can contain racial biases. …
… the policy principle of proportionate universalism: broadly providing services or resources without targeting specific families or people. Strategies to decrease rates of child maltreatment would be better directed toward community-based approaches that support children and families facing adversities and living in poverty.