
Tuesday, February 8, 2022

Cutting through the spin about predictive analytics in child welfare

The Scarlet Number: Allegheny County (metropolitan Pittsburgh) has been
trying to slap a "risk score" on every child at birth. The score could haunt
them their entire lives.

In Allegheny County, Pa., even the county’s hand-picked ethics reviewers had reservations about its Orwellian “Hello Baby” algorithm.  A key feature of the program flunked one reviewer’s ethics test. 

Second of two parts.  Read part one here.

Yesterday’s post to this blog discussed the amazing good fortune of Emily Putnam-Hornstein, America’s foremost evangelist for using “predictive analytics” to advise family policing agencies concerning everything from who should get “preventive services” to which children should be torn from their parents’ arms. (Another term for this is “predictive risk modeling” (PRM), but a better term than either is computerized racial profiling.) 

It seems that whenever Putnam-Hornstein co-authors an algorithm, the people chosen to do an “independent” ethics review are predisposed to favor it.  At a minimum, they seem to be ideological soulmates.  Sometimes they’ve co-authored papers with Putnam-Hornstein herself or with someone who wrote an algorithm with her. 

But even with the deck so stacked, in one case the ethics reviews offered some strong cautions–including suggesting that a key part of the program for which the algorithm would be used is unethical.  Though generally the reviews were favorable, the reviewers’ concerns were so serious that the agency that commissioned the reviews, the Allegheny County, Pa., Department of Human Services, went to great lengths to spin the results and direct readers toward the spin instead of the reviews themselves.  

The algorithm in question is the second of two in use in Allegheny County. 

The first, the Allegheny Family Screening Tool (AFST), stamps an invisible “scarlet number” risk score on every child who is the subject of a neglect allegation screened by the county’s child abuse hotline.  The higher the score, the greater the supposed risk.  Even though the ethics review for that one was co-authored by a faculty colleague of one of the creators of the algorithm, it cautioned that one reason AFST is ethical is that it does not attempt to stamp the scarlet number on every child at birth – something known as “universal-level risk stratification.” 


This is so Orwellian that even other family policing agencies can’t stomach it.  As noted in yesterday’s post about Putnam-Hornstein’s work in California, the California Department of Social Services declared that 

The Department believes that “universal-level risk stratification” is unethical and has no intention to use it now or in the future. Identifying and proactively targeting services to families with no [child welfare services] involvement is a violation of families’ privacy and their rights to parent as they see fit. This would be an overreach in the roles and responsibilities of a government agency. 

So when Allegheny County decided that, ethics-be-damned, it wanted an algorithm to do exactly what appalled their counterparts in California, and exactly what their own prior ethics review implied would be unethical, the solution was obvious: Commission another ethics review! 

In fact, they commissioned two (or maybe three) – one of them from an ideological soulmate of the co-author of both Allegheny County algorithms – Putnam-Hornstein.  

Sure enough, the county got much of what it wanted.  But the reviews displayed far more nuance than the county apparently expected, going into detail about serious problems with this approach, even as they claimed these obstacles could be overcome. 

So the county went into full spin mode.  In 2019, its first publication about the new algorithm, part of a program called “Hello Baby,” merely declared that the ethics reviews existed, implying that Hello Baby got a seal of approval – but with no link to the documents themselves. 

A year later, the county put out its own summary of the ethics reviews. Although at last the actual reviews were posted online, there were no links from the county’s summary – and the reviews remain harder to find.  As we noted in our previous post, it’s sort of like the way Donald Trump’s attorney general, William Barr, handled the Mueller report.  In the case of the Allegheny County algorithm, the gap between the actual documents and the spin isn’t as wide – but it still tells an interesting story. 

So let’s look closely at the parts of those reviews that Allegheny County, and Putnam-Hornstein, probably least want you to notice. 

● The first thing to notice is that one of the two published reviews may never have been completed. It’s labeled a draft. 

● The second thing to notice is that the draft refers to itself as “one out of three perspectives from cross-disciplinary researchers looking [at] different aspects of the risk-scoring system that Allegheny County plans to deploy.” [Emphasis added.]  But the county has only published two, and only ever refers to two.  What happened to the third? UPDATE, FEB. 22: Responding to an email query from NCCPR, Erin Dalton, director of the Allegheny County Department of Human Services, says there were only two ethics reviews. She said the draft may have been referring to a separate review of methodology and data science.

The other published ethics review strongly suggests a key feature of Hello Baby – the fact that you’re in it unless you remember to opt out – is unethical.  The review sets criteria for such a feature to be ethical. Hello Baby doesn’t meet the criteria. 

Selling Hello Baby 

There are two key selling points for Hello Baby: One, it’s supposedly a purely voluntary program; two, the vast troves of data will be used only for targeting prevention.  We’ll start with the second. 


Child abuse investigations are run by another division of the same agency that oversees Hello Baby.  Both divisions of this same agency are ultimately overseen by Erin Dalton, who is as nonchalant about the harm of foster care as she is fanatical in her desire to vacuum up data about poor people.  Nevertheless, Dalton’s agency publicly promises that child abuse investigators won’t see the Hello Baby risk scores or other data from that program.
 

One of the ethics reviewers, Prof. Michael Veale of University College London, saw the problem. It turns out there’s even a name for it: function creep.  He writes: 

One underlying anxiety concerning predictive systems in the public sector is that by virtue of being created for one task, they establish an infrastructure consisting of many aspects—including data, technology, expertise and culture—which might expand beyond its original scope into areas its original democratic and societal mandate did not permit. …

Some will be concerned that while [using the Hello Baby risk score only for prevention] might be the policy today, it might not be robust to change in the future. Similarly, those who might have lost trust in a public service more generally might not trust assurances that this inferred data is deleted or not passed onto other actors in the system. 

Veale suggests that the county come up with 

some legally binding declaration … delimiting the purposes of this system in advance to a sufficiently narrow scope and set of actors. This agreement would then serve as a mechanism that could be used to hold future uses of this model to account—at least insofar as it would have to be actively and ideally publicly removed before the purposes of a score or a model could change. 

This appears to be based on the naïve assumption that, were Allegheny County to want to use Hello Baby for child abuse investigations, the shame of having to go public might be a deterrent. 

On the contrary, when – not if, because it’s going to happen – the data are used to decide whom to investigate as a potential child abuser and when to take their children, it will be done with pride and fanfare.  Because here’s how it will happen: 

A three-year-old boy, call him Jason, is killed by his father.  Jason was “known to the system”; a previous allegation had been deemed unfounded.  Somebody leaks the fact that Jason’s father had a high Hello Baby risk score.  The caseworker who investigated the father gives a tearful television interview in which she says: “If only I’d known that Hello Baby thought he was high risk, I never would have left the child in that home.” 

At that point three things happen: 

● A member of the Pennsylvania Legislature introduces a bill requiring that information from Hello Baby, and anything else in the state like it, be fully shared with child protective services.  He calls it “Jason’s Law,” of course. 

● Erin Dalton or her successor calls a news conference and declares that the Allegheny County Department of Human Services isn’t about to wait for the legislature – they’re ordering full information sharing right now!  

● There are warnings that algorithms that predict terrible harm will come to a child, including AFST, have a record of being wrong more than 95% of the time – potentially flooding the system with “false positives” that do enormous harm to innocent families and make it harder to find the few children in real danger.  The warnings are ignored. 

A pinky swear is not enough. 

Having raised an urgent concern, Veale comes up with a solution that has all the enforceability of a pinky swear. 
There’s still another danger.  Anyone Hello Baby labels high-risk will be offered a series of services not offered to anyone else.  At the highest alleged level of risk, the program calls it “relentless engagement.”  Therefore, the service provider, who will be regularly coming into the home to engage relentlessly, will know from day one that a high-tech algorithm has branded these parents high risk for abusing their children.  That service provider almost always will be a mandated reporter, required to report any suspicion of child abuse and neglect (and in Pennsylvania, the training curriculum is fanatical about urging reporters to report! Report! Report!) 

So even the other ethics reviewer, Deborah Daro, a Senior Research Fellow at Chapin Hall and an ideological soulmate of Putnam-Hornstein, expressed concern about this.  She writes: 

All home visitors report a proportion of their participants to child protective services. … The [Predictive Risk Model] gives service providers additional information on a family’s history that may alter the way workers interpret the conditions they do observe. Even if the exact details regarding a family’s history is [sic] not provided to program staff or other providers, the fact parents have been identified through the PRM as being at high-risk will convey a general profile of concerns. As such, key  implementation questions for the county to address include: 

• How might knowledge of a family’s prior history with the child welfare and justice systems impact a provider’s judgment regarding current relationships in the home and the ability of other caretakers (particularly the father) to appropriately care for the infant? 

• How does this knowledge impact how providers might interpret a mother’s actions – will they be less forgiving of minor concerns they observe? 

• Will knowledge of a family’s history increase the likelihood a provider will report the family to child welfare as a potential risk for maltreatment if the family drops out or refuses additional program services? … 

Heightened awareness of a family’s circumstances may create surveillance bias, resulting in a higher probability of a family being reported. Providers will know more about a family and will need to weigh this knowledge against a family’s willingness or reluctance to remain in the program. 

Notice Daro’s own bias here.  Allegheny County brags that all services provided under
Hello Baby are purely voluntary and families are free to drop out at any time.  But Daro seems to think exercising that right is still another reason for heightened suspicion.
 

Having raised the surveillance bias issue, Daro then cops out, suggesting the same failed solution that proponents of the child welfare surveillance state fall back on whenever the harm they do comes to light: We’ll fix it with more “training.” 

Defining “voluntary” 

Another key element of the selling of Hello Baby is the claim that it’s purely voluntary.  Technically, yes – but you’d better be very sharp and wide awake during the first days and hours of your baby’s life to avoid being forced into the program.  And isn’t everyone wide awake and able to absorb everything during that time? 

Because Hello Baby forces you in unless you affirmatively opt out.  And you get only two chances to opt out.  The first chance comes while you and your newborn are still in the hospital.  Amid everyone else coming and going and handing you forms and discharge papers and God-knows-what else, you are given an information packet selling Hello Baby that also tells you how to opt out.  The second, and last, chance comes in the form of a postcard sent to your home – it’s not clear when, but presumably very soon after you come home with your baby.  You have to mail it back.  Miss those chances and Allegheny County has free rein to dig up all the electronic dirt on you that is called for in the algorithm and slap a risk score on you and your baby.  The score can follow you, and your child, forever. 


Oh, you can drop out of any services offered under the program at any time – though, as noted above, that might prompt the service “provider” to call the child abuse hotline on you – but you never again get a chance to opt out of data collection or make them delete the data they’ve already gathered.
 

Here’s what Daro writes about when this approach, called “passive consent,” is ethical and when it is not: 

This approach is considered appropriate only if the intervention or strategy involves minimal risk to the participant and if obtaining written approval for the procedure is not practical or feasible. [Emphasis added.] It is not clear if this approach has already been approved by the county’s Institutional Review Board. If it has, then the approach has been judged appropriate in this instance. If it has not, the county will need to make the case as to why it is not asking parents to “opt in” for the screen. 

It is just as “practical and feasible” to presume someone is not in the program until they check a box saying they’re in, as it is to presume they’re in until they check a box that says they’re out.  So by Daro’s own criteria, this key aspect of Hello Baby is unethical. 

And the county’s response illustrates perfectly why putting all this data power in their hands is so dangerous. They respond that: 

The “passive consent” is only for running the PRM, which commits clients to nothing. 

After all, the county continues, families still don’t have to accept the “services.”  But, of course, allowing the county to run the PRM commits the family to surrendering vast amounts of personal data that can then be turned against them at any time. That’s hardly nothing. 

As for Daro’s stipulation that this aspect of Hello Baby should be approved by the county’s Institutional Review Board, the Allegheny County Department of Human Services replied: 

Allegheny County does not have an institutional review board.

Tuesday, March 17, 2020

The Pittsburgh approach to child welfare: Harass the mothers and stigmatize the children

According to a lawsuit, UPMC Magee-Women’s Hospital tested a pregnant woman 
for drugs without her consent. Then, pursuant to “practices, policies, 
and/or agreements” the hospital reported the false positive result 
to child protective services  – which then harassed the family. (Photo by Piotrus)

UPDATES, AUGUST 2, 2022: 

● A federal judge has refused to dismiss the claims against the University of Pittsburgh Medical Center.  As The Legal Intelligencer reported in May, the judge wrote that: 

“Averments set forth in the amended complaint allege that [Allegheny County Children Youth and Families] used UPMC as a form of ‘cat’s paw’ to undertake inquiries and to administer drug tests on UPMC’s labor and delivery patients without their consent, and then to use reports of those ostensibly private and confidential medical inquiries and ‘provision and uncertain’ test results as a predicate to launch unwarranted and unconstitutional child abuse investigations.” 

The judge also rejected a claim by UPMC that boiled down to: Well, they didn't actually take away the kids, so what happened to the families is no big deal.

● Lawyers for the plaintiffs say that after the initial suit was filed, more mothers came forward with similar allegations. They are seeking class-action status. 

● Marc Cherna has retired.  Unfortunately he was replaced by Erin Dalton, who is even worse.

● A lawyer for the plaintiffs noted that the decision has implications well beyond Pittsburgh, telling the Legal Intelligencer: “Hospitals across the country need to take note of this decision and evaluate their practices.”

● Although for years the media swooned over Pittsburgh's dystopian child welfare "predictive analytics" algorithm, known as AFST, the first truly independent evaluation reveals that this algorithmic emperor has no clothes.

 

ONE CASE ILLUSTRATES A REMARKABLE NUMBER OF PROBLEMS COMMON TO CHILD WELFARE, INCLUDING:


 ● The fanatical desire to persecute certain mothers who smoke marijuana – or even are just falsely accused of smoking marijuana – no matter what that persecution does to their children.

● The harm done by journalists whose work has the effect of encouraging that kind of persecution.

● The campaign in Pennsylvania to make it even harder to expunge records of false allegations of child abuse.

● The harm done by the behavior of some doctors who specialize in detecting alleged child abuse.

● How the latest fad in child welfare, “predictive analytics,” makes everything worse.
  

            They should have been among the most joyful days in the lives of Cherell Harrington and her family.  But starting just before she gave birth to her third child, late in 2017, the hospital where she gave birth and the child protective services agency in Allegheny County (metropolitan Pittsburgh) brought the worst kind of stress into the family’s life – they effectively threatened the family itself.

            Everyone ultimately agreed that Harrington did not abuse or neglect her newborn in any way.  Now she is suing the county and the hospital.  And it’s not just Cherell Harrington.  According to the lawsuit, there is a “plan and/or agreement” between the county and the hospital to do this to new mothers. 

The practices involved allegedly are so common that attorney Margaret Cook of the Law Offices of Timothy P. O’Brien and lawyers from the American Civil Liberties Union of Pennsylvania are seeking class-action status for their suit.  (The ACLU of Pennsylvania’s legal director is a member of NCCPR’s volunteer Board of Directors.)

The hospital claims it just follows state law.  But even in Pennsylvania, where legislators take pride in passing ever more draconian laws so they can look tough on child abuse – no matter what that actually does to the children – there is no requirement to report Harrington and others like her to child protective services.

Even if Harrington wins her lawsuit, the nightmare may not end.  That’s because all this happened in Pittsburgh, home of the nation’s most advanced, Orwellian experiment in using “predictive analytics” in child welfare.

The algorithm used by the Allegheny County Department of Human Services and its Division of Children, Youth and Families (AC-CYF) doesn’t distinguish between true reports and false reports. So even though there were no grounds to report the mother at all, the mere fact that medical professionals reported her to the child welfare agency will raise the “risk score” for the child if the data are still in the system and anyone phones in some other false report against the parents.

 It’s not clear how long such information remains accessible.  Depending on how reports are classified and their disposition, the information may be available for a year or for decades.  And there is a campaign underway to make things even worse.  So it’s possible that decades from now, the child himself may be labeled a higher risk for abusing his own children if anyone ever accuses him of abuse or neglect.
           

It all began with a drug test

           
            We don’t know why Magee-Women’s Hospital, a part of the University of Pittsburgh Medical Center (UPMC), decided to test Harrington for drugs.  We do know that Harrington is African-American – which makes such invasions of privacy more likely.

            Here’s what else we know, according to the lawsuit:

● Harrington never consented to the test. 

● The preliminary test came back positive, but only for marijuana.

● Such tests often are unreliable. Later, a more definitive test came back negative.

● The newborn tested negative for any drugs, including marijuana.

● Even were the tests positive, there is no evidence that marijuana use makes one a bad parent. Affluent parents even brag about it in Facebook groups.

Nevertheless, based simply on that one preliminary false positive test, the hospital reported Harrington to Allegheny County CYF.  And that false positive test was enough to launch an investigation.

Or was it just a “plan of safe care”?

This case illustrates that they’re really the same thing.  “Plan of safe care” is a term used in that repository for so much bad child welfare policy, the federal “Child Abuse Prevention and Treatment Act.”  Both CAPTA and Pennsylvania law require medical professionals to turn in new mothers to child protective services agencies if there is evidence the infant was “affected” by parental substance use.  Officially these are not necessarily child abuse reports.  But they are, in all but name.

In the case of Ms. Harrington, according to the lawsuit:

● There were no grounds to turn her in, since the test was a false positive – and her newborn tested negative.

● Allegheny County responded anyway, and the response was identical to a child abuse investigation.

So as you read on, and see what happened to this family, keep in mind that what happened here is exactly the kind of behavior at least one Pennsylvania journalist seemed to want when she wrote this story.

What happened to the Harrington family


            According to the lawsuit:

Less than three days after giving birth to her son by caesarean section, an Allegheny County CYF caseworker entered Harrington’s room and told her that whenever the hospital reports any kind of positive drug test, the agency investigates.

            Two days after Harrington was discharged, the same caseworker showed up at the family home, inspected it from top to bottom, required Harrington and her husband to answer all sorts of personal questions and even questioned their 11-year-old daughter about her mother’s “use of addictive substances.”  The caseworker would go on to question the daughter’s school social worker.

            Then Harrington was coerced into a “counseling” session with a drug treatment program and forced to let the program test her for drugs again.  If she didn’t, she’d be reported to a judge for “failure to cooperate” and forced to go downtown for drug tests every month. 

            Harrington was coerced into signing all sorts of release forms  – but given no copies of what she signed.  According to the lawsuit “Ms. Harrington signed the documents because she feared that if she did not comply with [Allegheny County Children, Youth and Families] directives, her children would be removed from her custody.”


Even after the drug treatment program concluded no treatment was necessary, the harassment continued. The caseworker returned, inspected the home all over again and – again – questioned the Harringtons’ 11-year-old daughter.

Based solely on the false positive drug test the caseworker wrote that Harrington “cannot or will not control [her] behavior” and her “protective capacity” for her children was “diminished.”

A second case

           
            The lawsuit also describes what happened to another African-American mother, Deserae Cook, when she gave birth at another UPMC hospital.  Asked upon admission to the hospital if she’d ever used illegal drugs, Cook replied she’d smoked marijuana in the past but stopped when she found out she was pregnant.

            The hospital secretly tested her – without her consent – and the test came back negative, confirming Cook’s account.  A drug test on the newborn also came back negative.

            Nevertheless, UPMC reported Cook to Allegheny County CYF – and her family, too, was put through a needless, traumatic investigation.

            All of this happened in spite of the fact that UPMC settled a lawsuit over the same practices in 2014.

            The current lawsuit sums up the routine behavior of UPMC and the Allegheny County Division of Children, Youth and Families (AC-CYF) this way:

UPMC and AC-CYF knew that a new mother’s self-report to a medical professional regarding prior drug use [or a new mother’s ‘unconfirmed positive’ drug test] constituted confidential medical information which UPMC was neither privileged nor legally required to disclose to AC-CYF absent evidence that her newborn was affected by illegal substance abuse or had withdrawal symptoms resulting from prenatal drug exposure. 
Nevertheless, in accordance with past practices, policies, and/or agreements between the Defendants, UPMC routinely, and in bad faith, reported this confidential medical information to AC-CYF and AC-CYF routinely accepted and acted on this confidential medical information to conduct unwarranted highly intrusive, humiliating, coercive and/or unconstitutional child abuse investigations of new mothers.

Why would a hospital be so cruel?


            Why would a big prestigious hospital inflict so much trauma on families? Perhaps they haven’t thought things through.

            UPMC is where Dr. Rachel Berger heads the “Child Advocacy Center.”  Berger co-authored a notorious article that formed the basis for an essay urging medical professionals to – literally – think less before reporting child abuse.  She also has gone out of her way to minimize the harm of foster care – in an essay co-authored by Erin Dalton, a deputy director of the Allegheny County Department of Human Services, where Dalton reports to longtime DHS director Marc Cherna. 

            The fact that it now appears Cherna’s agency has some kind of special “practices, policies, and/or agreements” with Berger’s hospital concerning reports alleging substance use by new mothers is one more indication that Cherna should be deemed to have overstayed his welcome.

            The other indication is his role in creating his agency’s dystopian predictive analytics experiment.

The AFST factor


            All of this would be bad enough anywhere – but this kind of trauma done to overwhelmingly poor, disproportionately nonwhite families is actually worse in Pittsburgh. That’s because Pittsburgh is a pioneer in using a “predictive analytics” algorithm whenever a family is the subject of a report alleging child neglect. 

            There are two versions of the Allegheny Family Screening Tool (AFST). The first version canvasses a vast trove of data (most of it collected on poor people) whenever CYF receives a report alleging child neglect.  It then coughs up a “risk score” which helps determine if CYF will investigate the call.  (All calls alleging abuse automatically must be investigated.  And now, it appears, Cherna and Berger have created another category of calls that must be investigated: All those that are part of some kind of arrangement between their respective institutions.)

            So the problem with AFST is not that it affected the initial reports on Harrington and Cook – the problem is what happens next time.

AFST counts reports workers later deem true and reports they deem false alike. Past reports raise the risk score – period. And if those past reports come from medical professionals, they raise the risk score further.

The amount of time the county’s computers can gain access to such reports does depend in part on whether they are unfounded or not.  Unfounded reports are supposed to be expunged after no more than one year and 120 days.  So if, in fact, the report was labeled unfounded, the report might no longer be accessible to AFST.  But if the report was deemed "substantiated," Harrington and her family remain at risk of being labeled “high risk” and subjected to the whole traumatic process – or much worse – all over again.

            And there’s a move afoot to try to persuade the legislature to let counties keep even unfounded reports – perhaps for as long as they feel like it.  If that happens, then in the future, the danger to families such as the Harringtons could become vastly worse.

            There also is an even more dangerous version of AFST.  In this version, Cherna is trying to slap a risk score on every child – at birth.  Cherna promises this version will be used only to target “prevention.” But there is no way to stop him or a successor from changing her or his mind in the future.

So imagine what the score would be on a child such as the Harringtons’ infant if that version of AFST had been in effect when that child was born.  (In theory, this version is voluntary, but you have to affirmatively opt out and, as we’ve seen, that’s a risk families actually under investigation don’t dare take.)

            The reality of Pittsburgh child welfare under the rule of Marc Cherna was best summed up by Deserae Cook in an interview with the Associated Press:  She said her experience with the hospital and with Cherna’s agency

“…was like a kick in the stomach.  What’s the reasoning? It felt embarrassing and humiliating. It felt like they were trying to find something, trying to take our child away.”

Monday, June 18, 2018

Foster care apologists shouldn’t have nuclear weapons


The likely next leader of the child welfare system in Pittsburgh co-authored an appalling defense of foster care. She’s also considering stamping a “scarlet-number” predictive analytics risk score on children at birth.


Allegheny County's predictive analytics algorithm operates
like an invisible "scarlet number" that can harm a child for life.

I’ve written often about the dangers of the latest fad sweeping through child welfare, “predictive analytics.”  The idea is to use an algorithm to predict which parents supposedly are likely to abuse their children. Proponents say it reduces human bias. In fact, it magnifies human bias and gives it a false veneer of objectivity.  It is the nuclear weapon of child welfare.

So it’s no wonder that the most prominent proponents of predictive analytics also are those who are most fanatical about wanting to tear apart more families – and often those most deeply in denial about the problem of racism in child welfare. The predictive analytics cheerleading squad is led by people such as Elizabeth Bartholet and Richard Gelles, and Gelles' principal disciple, Naomi Schaefer Riley.

Indeed, it is very hard to find anyone who supports this kind of computerized racial profiling who is both a real advocate of family preservation and does not run a child welfare system.

For those who do run such systems, the temptation can be irresistible.  That brings us to Pittsburgh and surrounding Allegheny County, Pa. That jurisdiction is the only one I know of where predictive analytics is up and running. (Similar efforts in Los Angeles and Illinois failed spectacularly.) The Pittsburgh experiment has been the subject of numerous gushy tributes from people like, well, Naomi Schaefer Riley – and one real critique, a chapter in Virginia Eubanks’ book, Automating Inequality, excerpted in Wired.

In Pittsburgh, when a call alleges abuse or neglect, an algorithm known as the Allegheny Family Screening Tool (AFST) mines a vast trove of data on the accused and coughs up a “risk score” for the child.  Like an invisible scarlet number, the child will wear that “risk score” for life – even if the original report on the parents was false.

So when the child grows up, if she herself becomes the target of a child abuse report, that high scarlet number from her childhood will count against her, making her look like a “higher-risk” parent – because, supposedly, she was at “high risk” as a child.

The argument made by backers of AFST boils down to this: Our system is run by really good people. Marc Cherna, the longtime director of the county Department of Human Services, has a solid track record for curbing needless foster care.  He has promised to use predictive analytics only in limited and responsible ways. In other words, you can trust him with nukes.

To which I and others have replied:

What about Cherna’s successor, and his successor’s successor? Any system that depends for success on the benevolence of a single leader with near-absolute power is too dangerous for a free society. Most of those pushing for the use of systems like AFST are nothing like Marc Cherna. On the contrary, they tend to be those most enthused about taking away more children and using algorithms to accomplish it.

Cherna’s likely successor is his deputy, Erin Dalton. She runs DHS’ Office of Data Analysis, Research and Evaluation. 

Predictive analytics is the nuclear weapon of
child welfare - and child welfare can't control its nukes
As Eubanks reveals in her book, Dalton is the author of an email disclosing that the county is considering introducing “a second predictive model … [that] would be run on a daily or weekly basis on all babies born in Allegheny County the prior day or week.” Such a model already exists — indeed it’s one of the models the designers of AFST proposed to the county in the first place.

In other words, Dalton is seriously considering a plan to stamp that scarlet number on every child in the county – at birth.  Once again, the response is assurances that, were this to happen, it would only be used to target prevention, not removal.

But now there is new reason to question such reassurances, and, indeed, any confidence that Dalton will act with restraint.  While I certainly wouldn’t call her enthusiastic about taking away children, she recently has shown herself to be far too sanguine about the harmful effects of child removal.

That is clear from a commentary she co-authored for the journal Pediatrics. (Other co-authors include Dr. Rachel Berger who runs the "Child Advocacy Center" at the University of Pittsburgh Medical Center.) The commentary puts her firmly in the camp of foster-care apologists, the people who desperately look for scholarly straws to grasp in order to refute the mountain of evidence that foster care often does enormous harm to the children it is meant to help.  I expect this from Naomi Schaefer Riley.  I did not expect it from Erin Dalton.

Dalton’s bizarre commentary is an attack on an innocuous little study. The study demonstrated that teenage mothers who give birth while already in foster care are far more likely to have the infants taken from them than teenage mothers who are not in foster care.  Half the teen mothers in foster care had their own children placed by age two.

The study’s conclusion is hardly radical: “More and better services are required to support these mothers and to keep mothers and children together wherever possible.” 

Dalton & Co. have several complaints about the study.  The study looked at just one jurisdiction, and it’s in Canada, no less – the province of Manitoba – so it may not be representative.  Many of the children were taken at birth, they write, so maybe they were placed in the same foster home as their mothers.  But the authors of the study believe this was the case for only “a few” of the children – and the gaps they found in placement rates persist all the way to age 2.

But the most alarming part of the critique from Dalton and her co-authors is this:

The outcome measure selected for this study (placement of the infant into foster care) is not the most important outcome for children and young mothers. Avoiding unnecessary foster care placement is a worthy goal, but placement of an infant, a young child, or an adolescent mother in foster care is not a bad outcome per se.

That is dangerously wrong. Foster care is, in fact, a bad outcome per se, and everyone in child welfare is supposed to know it.

Foster care may, in some limited circumstances, be a less bad outcome than leaving the child in her or his own home. For that very reason, there are a limited number of cases in which foster care placement is essential. 

But it is still a bad outcome.  As I discussed in my previous post about foster care apologists, foster care sometimes may be the least detrimental alternative – a concept that should, by now, be Social Work 101.  The fact that a child welfare leader who has child welfare’s equivalent of the nuclear codes doesn’t get this is deeply disturbing.

And it gets worse.  Like Riley, Dalton and her colleagues desperately seek something to refute the massive, comprehensive studies showing that in typical cases, children left in their own homes fare better even than comparably-maltreated children placed in foster care.  The studies, by MIT Professor Joseph Doyle, looked at what actually happened to more than 15,000 children, following them all the way into their late teen years and sometimes into young adulthood.

A reminder of what MIT Prof. Joseph Doyle's massive studies actually found

Dalton & Co. ignore those studies. Instead, they write:

The authors of several longitudinal studies suggest that under certain circumstances, foster care may result in better long-term outcomes than the outcomes of remaining with biological parents.

Leaving aside the use of the demeaning, dehumanizing term “biological parents” – suggesting people who are no more important to a child than a test tube, and leaving aside the fact that three studies barely qualify as “several” – the “longitudinal studies” cited are extremely weak.

As with the studies cited by Riley, they depend not on what actually happened to the young people, as the Doyle studies do, but on subjective evaluations of children’s behavior, including evaluations by caretakers – creating significant potential for bias, or simply honest error.

One of the longitudinal studies didn’t have much longitude – it measured changes in three groups of children after only six months – and the three groups had a total of only 92 children.  The study authors themselves call it a “short-term follow-up,” yet Dalton and her co-authors try to use it to justify claims about “long-term outcomes.” 

Another study, again using subjective evaluations, involved only 30 children – from Israel. So now a study from Israel is cited by the same people who question the validity of relying on a study from Canada.

The third study also was a subjective assessment. It appears that the assessment took place only once, so it is unclear whether this study was “longitudinal” at all.  This study found that “maltreated children who remain with their birth parents have mental health problems at the same rate as maltreated children who are placed.” [Emphasis added.] Not exactly a ringing endorsement of foster care.

That is the “evidence” Dalton and her co-authors cite to justify this claim:

The assumption that reducing foster care placements always improves outcomes is not necessarily true and may be used to support policies that are not in the best interests of children.

No one claims that reducing foster care placements always improves outcomes. But it almost always does.  And that makes it entirely reasonable to worry that outcomes are worse for children of teen mothers when those children are placed in foster care.

As for “policies that are not in the best interests of children,” what policies exactly do Dalton and her co-authors have in mind?  They never say.  The study they criticize calls only for “more and better services.” Surely they don’t want fewer and worse services.

And the very use of the phrase “best interests of the children” is another indication that child welfare in general and an agency Dalton is likely to run in particular are not ready for something as powerful and easy to misuse as predictive analytics.

I discuss why this seemingly benevolent and inarguable phrase is so harmful in this post, dealing with Maine’s governor, Paul LePage, a kind of Donald Trump mini-me. Suffice it to say here that it is a phrase filled with hubris. It gives free rein to the biases of middle-class professionals – and the algorithms they create. The alternative construct, least detrimental alternative, was suggested in part for that very reason.

I expect no better from Paul LePage.  I used to expect far better from the system in Allegheny County, Pa.

The nuclear weapon of predictive analytics is far too dangerous to entrust to the field of child welfare. What we need to demand from child welfare is irreversible, verifiable denuclearization.

Tuesday, April 10, 2018

Predictive analytics in child welfare: The harm to children when “the stuff stays in the system.”

Marc Cherna was once one of the best human services leaders in America. But even he shouldn't have the power to be the Mark Zuckerberg of child welfare.


Today, across America and much of the world, the big story will be Facebook CEO Mark Zuckerberg testifying before Congress about how personal data from millions of Americans wound up in the hands of Cambridge Analytica. Although the data breach is outrageous, at least those data were originally uploaded voluntarily – Facebook users have the right to not share their data in the first place.

In Pittsburgh, Pa., poor people have NO. SUCH. CHOICE. They are forced to surrender their data.  And their data can be used to decide whether to take away their children. I’ve written about the implications here and here.  Another example comes courtesy of a Pennsylvania dentist:


Last week, I published a post about a dentist in Pennsylvania who sent threatening form letters to some of his patients. The patients had dared to not schedule follow-up appointments when the dentist thought they should. In the case which brought this to public attention, the patient didn’t like the dental practice and had made clear her intention to go elsewhere.

The letters threaten to report patients who don’t schedule follow-up appointments to child protective services.  According to at least one news account, the dentist acknowledges following through on the threat 17 times last year.

The earlier post discusses the potentially devastating consequences for children. If the report is “screened in” – as is likely because it came from a medical professional – it means, at a minimum, a highly intrusive investigation that could do lasting emotional harm to the children.  That harm can’t be undone even if the child welfare agency later realizes the report was false.

The mere existence of a false report in a child welfare agency file can increase the chances that, if there’s another false report, the new report will be wrongly substantiated – because of a bizarre notion in child welfare that enough false reports are bound to equal a true report. This increases the odds that the children will be consigned to the chaos of foster care.

And, of course, all those false reports steal time caseworkers should be spending finding children in real danger.

The only good news here is that this dentist practices in eastern Pennsylvania.  At least in that part of the state a child abuse “hotline” operator deciding if a case should be “screened-in” can check the file and, seeing a previous allegation based solely on a missed dental appointment, might realize how absurd it was.

Were this dentist at the other end of the state, in Allegheny County (metropolitan Pittsburgh), it could be far worse.

Automating absurdity


That’s because Allegheny County is home to the nation’s most advanced experiment in using “predictive analytics” to decide when to investigate if a child is in danger of being abused or neglected. 

Whenever the county receives a report alleging that a child is being abused or neglected, an algorithm known as the Allegheny Family Screening Tool (AFST) uses more than 100 different data points to spit out a secret “risk score” between 1 and 20 – an invisible “scarlet number” that tells the county how likely it is that the child is, in fact, being abused or neglected or is at risk of abuse or neglect.  The higher the number, the more likely the report will be “screened in” and investigators will be sent out.

Though the investigators don’t know the risk score, they do know that a high risk score is why they are being sent out in the first place.

Prof. Virginia Eubanks offers a devastating critique of AFST in her book, Automating Inequality. Part of that chapter is excerpted in Wired magazine. I discussed her findings in detail in Youth Today and I discussed the ethically-challenged “ethics review” used to justify AFST on this blog, so I won’t repeat that overall critique here.



But the case of the disgruntled dentist prompts me to focus on one particular piece of the Allegheny algorithm: The mere fact that a previous report exists – regardless of how stupid that report may have been – raises the risk score.  No human being intervenes first to see if the report had any legitimacy.

In fact, it appears that the Allegheny County algorithm even counts previous reports that were considered so absurd they were screened out with no investigation at all. 

So suppose, hypothetically, an Allegheny County dentist reported someone just for missing a follow-up appointment.  This was considered too absurd even to investigate.  A few months later someone else calls the child abuse hotline about the same family.  The existence of that previous, uninvestigated report from the dentist raises the risk score.  So now, the child has a higher scarlet number.

Making it even worse: in the Allegheny algorithm, still another factor that raises the risk score is whether a report, no matter how absurd, came from a medical professional – such as a dentist.
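The county has not published AFST’s actual features or weights, so the following is a purely hypothetical toy scorer – made-up names, made-up weights, not the county’s code. It exists only to illustrate the mechanism described above: when a model counts every prior report, even screened-out ones, and adds weight for medical-professional reporters, a single junk report is enough to push a family’s score higher.

```python
# Hypothetical illustration only. The real AFST model is proprietary;
# these feature names and weights are invented for this sketch.

def toy_risk_score(prior_reports, reporter_is_medical):
    """Return a score on a 1-20 scale from a handful of made-up weights.

    prior_reports: list of dicts describing earlier hotline reports.
    Every entry raises the score here, whether or not it was ever
    investigated -- mirroring the design criticized above.
    """
    raw = 2.0                               # arbitrary baseline
    raw += 3.0 * len(prior_reports)         # all priors count, even screened-out ones
    if reporter_is_medical:
        raw += 4.0                          # reporter type adds weight
    return max(1, min(20, round(raw)))      # clamp to the 1-20 scale

# A family whose only "history" is one absurd, screened-out report from a dentist:
priors = [{"source": "dentist", "screened_out": True}]
with_junk_report = toy_risk_score(priors, reporter_is_medical=False)
clean_history = toy_risk_score([], reporter_is_medical=False)
assert with_junk_report > clean_history  # the junk report alone raises the score
```

In a design like this, no human judgment about the prior report’s legitimacy ever enters the calculation: the screened-out dentist report and a substantiated one contribute identically.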

And Cherna is pretty fanatical about keeping and using data – regardless of the data’s reliability.  This is clear in what he told Prof. Eubanks about a related issue: reports that are legally allowed to be kept for far longer, those in which caseworkers “substantiate” the allegation. In Pennsylvania, as in most states, that means only that the caseworker decides it is slightly more likely than not that abuse or neglect occurred.

In cases alleging actual abuse, there is a long, almost impossible appeals process.  And in a bizarre twist of Pennsylvania law, in less serious cases there is no appeals mechanism at all.  In such cases, the county keeps a record of the case until the child who was the subject of the report turns 23. 

This is where we find out how Marc Cherna feels about keeping junk reports and using them in his algorithm.  He told Eubanks: “The stuff stays in the system.” And, of course, Cherna said what those who hold onto junk reports of child abuse always say. In effect, enough false reports are bound to equal a true report. Said Cherna: “A lot of times where there’s smoke there’s fire.”

But a lot more often there’s someone who’s just blowing smoke.  So let’s just be grateful that a certain dentist hasn’t opened a branch office in Pittsburgh.

And, as we watch what's happening in Congress today, let’s also remember one thing more.  Marc Cherna is now the Mark Zuckerberg of child welfare. Both Zuckerberg and Cherna amass huge quantities of data. Then they decide what will happen to those data.  There are two key differences: Marc Cherna isn’t doing it to make money – in fact, both his intentions and his track record as a child welfare leader are excellent. On the other hand, Facebook can’t use data to take away your children. Marc Cherna’s agency can.

UPDATE: 11:45 am: In an earlier version of this post, I asked whether inclusion of certain elements in AFST violated Pennsylvania law concerning expungement of records involving unfounded reports. This was based on a list of factors included in AFST. I noted that I had e-mailed Cherna and his deputy Erin Dalton on April 5 and received no response.  I have just received a response from Cherna, in which he makes clear that, in fact, AFST does NOT include any information that legally should be expunged.  Therefore, I have deleted that portion of the original post.