Tuesday, February 8, 2022

Cutting through the spin about predictive analytics in child welfare

The Scarlet Number: Allegheny County (metropolitan Pittsburgh) has been
trying to slap a "risk score" on every child at birth. The score could haunt
them their entire lives.

In Allegheny County, Pa., even the county’s hand-picked ethics reviewers had reservations about the county’s Orwellian “Hello Baby” algorithm.  A key feature of the program flunked one reviewer’s ethics test. 

Second of two parts.  Read part one here.

Yesterday’s post to this blog discussed the amazing good fortune of Emily Putnam-Hornstein, America’s foremost evangelist for using “predictive analytics” to advise family policing agencies concerning everything from who should get “preventive services” to which children should be torn from their parents’ arms. (Another term for this is “predictive risk modeling” (PRM), but a better term than either is computerized racial profiling.) 

It seems that whenever Putnam-Hornstein co-authors an algorithm, the people chosen to do an “independent” ethics review are predisposed to favor it.  At a minimum, they seem to be ideological soulmates.  Sometimes they’ve co-authored papers with Putnam-Hornstein herself or with someone who wrote an algorithm with her. 

But even with the deck so stacked, in one case the ethics reviews offered some strong cautions, including the suggestion that a key part of the program the algorithm would serve is unethical.  Though the reviews were generally favorable, the reviewers’ concerns were so serious that the agency that commissioned them, the Allegheny County, Pa., Department of Human Services, went to great lengths to spin the results and direct readers toward the spin instead of the reviews themselves. 

The algorithm in question is the second of two in use in Allegheny County. 

The first, the Allegheny Family Screening Tool (AFST), stamps an invisible “scarlet number” risk score on every child who is the subject of a neglect allegation screened by the county’s child abuse hotline.  The higher the score, the greater the supposed risk.  Even though the ethics review for that one was co-authored by a faculty colleague of one of the algorithm’s creators, it cautioned that one reason AFST is ethical is that it does not attempt to stamp the scarlet number on every child at birth – something known as “universal-level risk stratification.” 
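To make that distinction concrete, here is a minimal, hypothetical sketch.  The feature names, weights, and score cap below are invented purely for illustration; they do not describe the actual AFST or “Hello Baby” models.

```python
# Hypothetical sketch only: invented features and weights, not the real models.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Family:
    name: str
    prior_agency_contacts: int    # the kind of administrative-data features
    public_benefits_records: int  # such models typically draw on

def risk_score(family: Family) -> int:
    """Toy stand-in for a predictive risk model's numeric output."""
    return min(20, 1 + 2 * family.prior_agency_contacts
                     + family.public_benefits_records)

def score_on_screened_report(family: Family, report_screened: bool) -> Optional[int]:
    """AFST-style use: a score is generated only when a hotline report is screened."""
    return risk_score(family) if report_screened else None

def score_every_birth(all_new_parents: list) -> dict:
    """Universal-level risk stratification: every family is scored at a child's
    birth, whether or not anyone has ever alleged anything about them."""
    return {f.name: risk_score(f) for f in all_new_parents}
```

The difference is not the math; it is the trigger.  In the first pattern a score exists only after someone calls the hotline.  In the second, the county scores everyone.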


This is so Orwellian that even other family policing agencies can’t stomach it.  As noted in yesterday’s post about Putnam-Hornstein’s work in California, the California Department of Social Services declared that 

The Department believes that “universal-level risk stratification” is unethical and has no intention to use it now or in the future. Identifying and proactively targeting services to families with no [child welfare services] involvement is a violation of families’ privacy and their rights to parent as they see fit. This would be an overreach in the roles and responsibilities of a government agency. 

So when Allegheny County decided that, ethics-be-damned, it wanted an algorithm to do exactly what appalled their counterparts in California, and exactly what their own prior ethics review implied would be unethical, the solution was obvious: Commission another ethics review! 

In fact, they commissioned two (or maybe three) – one of them from an ideological soulmate of Putnam-Hornstein, the co-author of both Allegheny County algorithms.  

Sure enough, the county got much of what it wanted.  But the reviews displayed far more nuance than the county apparently expected, going into detail about serious problems with this approach, even as they claimed these obstacles could be overcome. 

So the county went into full spin mode.  Its first publication about the new algorithm, part of a program called “Hello Baby,” appeared in 2019 and merely declared that the ethics reviews existed, implying that Hello Baby got a seal of approval – but with no link to the documents themselves. 

A year later, the county put out its own summary of the ethics reviews.  Although at last the actual reviews were posted online, there were no links from the county’s summary – and the reviews remain harder to find.  As we noted in our previous post, it’s sort of like the way Donald Trump’s attorney general, William Barr, handled the Mueller report.  In the case of the Allegheny County algorithm, the gap between the actual documents and the spin isn’t as wide – but it still tells an interesting story. 

So let’s look closely at the parts of those reviews that Allegheny County, and Putnam-Hornstein, probably least want you to notice. 

● The first thing to notice is that one of the two published reviews may never have been completed. It’s labeled a draft. 

● The second thing to notice is that the draft refers to itself as “one out of three perspectives from cross-disciplinary researchers looking [at] different aspects of the risk-scoring system that Allegheny County plans to deploy.” [Emphasis added.]  But the county has only published two, and only ever refers to two.  What happened to the third?  UPDATE, FEB. 22: Responding to an email query from NCCPR, Erin Dalton, director of the Allegheny County Department of Human Services, says there were only two ethics reviews.  She said the draft may have been referring to a separate review of methodology and data science.

The other published ethics review strongly suggests that a key feature of Hello Baby – the fact that you’re in it unless you remember to opt out – is unethical.  The review sets criteria for such a feature to be ethical.  Hello Baby doesn’t meet the criteria. 

Selling Hello Baby 

There are two key selling points for Hello Baby: one, it’s supposedly a purely voluntary program; two, the vast troves of data will be used only for targeting prevention.  We’ll start with the second. 


Child abuse investigations are run by another division of the same agency that oversees Hello Baby.  Both divisions are ultimately overseen by Erin Dalton, who is as nonchalant about the harm of foster care as she is fanatical in her desire to vacuum up data about poor people.  Nevertheless, Dalton’s agency publicly promises that child abuse investigators won’t see the Hello Baby risk scores or other data from that program.
 

One of the ethics reviewers, Prof. Michael Veale of University College London, saw the problem.  It turns out there’s even a name for it: Function Creep.  He writes: 

One underlying anxiety concerning predictive systems in the public sector is that by virtue of being created for one task, they establish an infrastructure consisting of many aspects—including data, technology, expertise and culture—which might expand beyond its original scope into areas its original democratic and societal mandate did not permit. …

Some will be concerned that while [using the Hello Baby risk score only for prevention] might be the policy today, it might not be robust to change in the future. Similarly, those who might have lost trust in a public service more generally might not trust assurances that this inferred data is deleted or not passed onto other actors in the system. 

Veale suggests that the county come up with 

some legally binding declaration … delimiting the purposes of this system in advance to a sufficiently narrow scope and set of actors. This agreement would then serve as a mechanism that could be used to hold future uses of this model to account—at least insofar as it would have to be actively and ideally publicly removed before the purposes of a score or a model could change. 

This appears to be based on the naïve assumption that, were Allegheny County to want to use Hello Baby data for child abuse investigations, the shame of having to go public might be a deterrent. 

On the contrary, when – not if, because it’s going to happen – the data are used to decide who to investigate as a potential child abuser and when to take away their children, it will be done with pride and fanfare.  Because here’s how it will happen: 

A three-year-old boy, call him Jason, is killed by his father.  Jason was “known to the system”: a previous allegation had been deemed unfounded.  Somebody leaks the fact that Jason’s father had a high Hello Baby risk score.  The caseworker who investigated the father gives a tearful television interview in which she says: “If only I’d known that Hello Baby thought he was high risk, I never would have left the child in that home.” 

At that point three things happen: 

● A member of the Pennsylvania Legislature introduces a bill requiring that information from Hello Baby, and anything else like it in the state, be fully shared with child protective services.  He calls it “Jason’s Law,” of course. 

● Erin Dalton or her successor calls a news conference and declares that the Allegheny County Department of Human Services isn’t about to wait for the legislature – they’re ordering full information sharing right now!  

● There are warnings that algorithms claiming to predict when terrible harm will come to a child, including AFST, have a record of being wrong more than 95% of the time – potentially flooding the system with “false positives” that do enormous harm to innocent families and make it harder to find the few children in real danger.  (A rough sketch of the arithmetic behind that kind of error rate follows this list.)  The warnings are ignored. 
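To see how a prediction tool can be wrong the overwhelming majority of the time even when its accuracy statistics sound impressive, consider a rough sketch of the base-rate arithmetic.  Every number below is a hypothetical assumption chosen only for illustration; none comes from Allegheny County or AFST.

```python
# Hypothetical numbers only: chosen to illustrate base-rate arithmetic,
# not drawn from Allegheny County or any real predictive risk model.

def positive_predictive_value(prevalence, sensitivity, specificity):
    """Share of families flagged as 'high risk' who actually are."""
    true_positives = prevalence * sensitivity
    false_positives = (1 - prevalence) * (1 - specificity)
    return true_positives / (true_positives + false_positives)

# Assume the predicted harm is rare (1 family in 100) and generously assume
# the model catches 80% of true cases while correctly clearing 90% of the rest.
ppv = positive_predictive_value(prevalence=0.01, sensitivity=0.80, specificity=0.90)
print(f"Flagged families actually at risk: {ppv:.1%}")      # roughly 7.5%
print(f"Flags that are false positives:    {1 - ppv:.1%}")  # roughly 92.5%
```

Even under those generous assumptions, more than nine out of ten “high risk” flags point at families who were never going to harm their children – and every one of those flags is an invitation to surveil an innocent family.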

A pinky swear is not enough. 

Having raised an urgent concern, Veale comes up with a solution that has all the enforceability of a pinky swear. 

 


There’s still another danger.  Anyone Hello Baby labels high-risk will be offered a series of services not offered to anyone else.  At the highest alleged level of risk, the program calls this “relentless engagement.”  Therefore, the service provider, who will be regularly coming into the home to engage relentlessly, will know from day one that a high-tech algorithm has branded these parents high risk for abusing their children.  That service provider almost always will be a mandated reporter, required to report any suspicion of child abuse and neglect (and in Pennsylvania, the training curriculum is fanatical about urging reporters to report! Report! Report!). 

So even the other ethics reviewer, Deborah Daro, a Senior Research Fellow at Chapin Hall and an ideological soulmate of Putnam-Hornstein, expressed concern about this.  She writes: 

All home visitors report a proportion of their participants to child protective services. … The [Predictive Risk Model] gives service providers additional information on a family’s history that may alter the way workers interpret the conditions they do observe. Even if the exact details regarding a family’s history is [sic] not provided to program staff or other providers, the fact parents have been identified through the PRM as being at high-risk will convey a general profile of concerns. As such, key  implementation questions for the county to address include: 

• How might knowledge of a family’s prior history with the child welfare and justice systems impact a provider’s judgment regarding current relationships in the home and the ability of other caretakers (particularly the father) to appropriately care for the infant? 

• How does this knowledge impact how providers might interpret a mother’s actions – will they be less forgiving of minor concerns they observe? 

• Will knowledge of a family’s history increase the likelihood a provider will report the family to child welfare as a potential risk for maltreatment if the family drops out or refuses additional program services? … 

Heightened awareness of a family’s circumstances may create surveillance bias, resulting in a higher probability of a family being reported. Providers will know more about a family and will need to weigh this knowledge against a family’s willingness or reluctance to remain in the program. 

Notice Daro’s own bias here.  Allegheny County brags that all services provided under Hello Baby are purely voluntary and families are free to drop out at any time.  But Daro seems to think exercising that right is still another reason for heightened suspicion.
 

Having raised the surveillance bias issue, Daro then cops out, suggesting the same failed solution that proponents of the child welfare surveillance state fall back on whenever the harm they do comes to light: We’ll fix it with more “training.” 

Defining “voluntary” 

Another key element of the selling of Hello Baby is the claim that it’s purely voluntary.  Technically yes, but you’d better be very sharp and wide awake during the first days and hours of your baby’s life to avoid being forced into the program – and isn’t everyone wide awake and able to absorb everything during that time? 

Because Hello Baby forces you in unless you affirmatively opt out.  And you get only two chances to opt out.  The first chance is while you and your newborn are still in the hospital.  Amidst everyone else coming and going and handing you forms and discharge papers and God-knows-what else, you are given an information packet selling Hello Baby that also tells you how to opt out.  The second, and last, chance comes in the form of a postcard sent to your home – it’s not clear when, but presumably very soon after you come home with your baby.  You have to mail it back.  Miss those chances and Allegheny County has free rein to dig up all the electronic dirt on you that the algorithm calls for and slap a risk score on you and your baby.  The score can follow you, and your child, forever. 


Oh, you can drop out of any services offered under the program at any time – though, as noted above, that might prompt the service “provider” to call the child abuse hotline on you – but you never again get a chance to opt out of data collection or make them delete the data they’ve already gathered.
 

Here’s what Daro writes about when this approach, called “passive consent,” is ethical and when it is not: 

This approach is considered appropriate only if the intervention or strategy involves minimal risk to the participant and if obtaining written approval for the procedure is not practical or feasible. [Emphasis added.] It is not clear if this approach has already been approved by the county’s Institutional Review Board. If it has, then the approach has been judged appropriate in this instance. If it has not, the county will need to make the case as to why it is not asking parents to “opt in” for the screen. 

It is just as “practical and feasible” to presume someone is not in the program until they check a box saying they’re in, as it is to presume they’re in until they check a box that says they’re out.  So by Daro’s own criteria, this key aspect of Hello Baby is unethical. 

And the county’s response illustrates perfectly why putting all this data power in its hands is so dangerous.  The county responds that: 

The “passive consent” is only for running the PRM, which commits clients to nothing. 

After all, the county continues, families still don’t have to accept the “services.”  But, of course, allowing the county to run the PRM commits the family to surrendering vast amounts of personal data that can then be turned against them at any time. That’s hardly nothing. 

As for Daro’s stipulation that this aspect of Hello Baby should be approved by the county’s Institutional Review Board, the Allegheny County Department of Human Services replied: 

Allegheny County does not have an institutional review board.