Tuesday, January 25, 2022

Garrett Therolf and the Magic Algorithm!


He co-authors a “news analysis” for the Los Angeles Times that sounds like a pitch by a company selling “predictive analytics” software for child welfare.  There’s also another cheap shot at family preservation and something that sure sounds like a dog-whistle.

Second of two parts. Read part one here.

Yesterday’s post to this blog reviewed the bad journalism practiced by former Los Angeles Times reporter, and current contributor, Garrett Therolf.  I noted that other journalists have accused him of repeated misrepresentations and, on one occasion, making up a quote.  The post talked about Therolf’s profound discomfort with any suggestion that there is systemic racial bias in child welfare. 

I said I believe this poor journalism contributes to the fact that Los Angeles County tears apart families at the second-highest rate among America’s biggest cities and their surrounding counties.  And I noted how he stacked the deck in a story about racial bias in child welfare. As Therolf portrayed it in a 2017 story, white people “marshal data,” Black people want to rely on anecdote and “folkways.” 

That post discussed what Therolf said during a January 20 “Ask the Reporters” video event presented by the Times.  But Therolf and a current Times reporter, Matt Hamilton, also wrote a “news analysis” for the Times headlined Anthony, Noah, Gabriel and beyond: How to fix L.A. County DCFS. 

Though called a news analysis, it was actually a sales pitch for a child welfare fad that is much beloved by those wedded to a take-the-child-and-run approach to child welfare: predictive analytics.  This is an approach in which a computer algorithm mines vast amounts of data – especially data about poor people – and tells the family policing agency whether a case is high-risk.  It amounts to computerized racial profiling.  
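
For readers unfamiliar with the mechanics, here is a bare-bones sketch of what such a tool does at its core. Every feature name and weight below is invented purely for illustration; no actual agency model is reproduced here:

```python
# Illustrative-only sketch of a risk-scoring tool's core logic.
# Feature names and weights are hypothetical; no real agency model
# is reproduced here.

FEATURE_WEIGHTS = {
    "prior_hotline_calls": 0.8,        # records of past system contact...
    "receives_public_benefits": 0.5,   # ...and markers of poverty dominate
    "prior_system_involvement": 0.9,   # the databases such tools draw on
    "child_age_at_first_call": -0.1,
}

def risk_score(family_record: dict) -> float:
    """Weighted sum over recorded variables; higher = flagged as higher risk."""
    return sum(weight * family_record.get(feature, 0)
               for feature, weight in FEATURE_WEIGHTS.items())

# A family with more recorded contact with public systems scores higher;
# that is the critics' point: the data measure surveillance, not danger.
family = {"prior_hotline_calls": 3, "receives_public_benefits": 1,
          "prior_system_involvement": 2, "child_age_at_first_call": 4}
print(round(risk_score(family), 2))  # 4.3
```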

It was tried first in criminal justice – and proven to be racially biased.  Yet Therolf and Hamilton present it not just as a solution to the problems plaguing the Los Angeles Department of Children and Family Services but as the only viable solution.  They regurgitate favorite talking points of Emily Putnam-Hornstein, America’s foremost evangelist for predictive analytics in child welfare – someone whose own extremism and penchant for deriding the work of Black people are documented here. 

Analyzing the “analysis” 

The story is a great illustration of Therolf’s general approach. So in this post, I will go through parts of the story paragraph by paragraph (excerpts are in italics), starting with that headline, which references the three horror story cases Therolf focuses on, to the near exclusion of all else: 

News Analysis: Anthony, Noah, Gabriel and beyond: How to fix L.A. County DCFS 

One could as easily have done a story headlined: “Los Angeles tears apart families at one of the highest rates in America’s largest urban areas – but children keep dying: How to fix L.A. County DCFS.”  But Therolf has shown no interest in L.A.’s outlier status.  While he has pointed out that Los Angeles has failed to prevent horror story cases for decades, he has not seen fit to remind readers that the rush to remove more children has done nothing to make them safer. 

In the long, troubled history of L.A. County child abuse cases, certain names stand out as avatars of how the system can go terribly awry. Anthony Avalos. Gabriel Fernandez. Noah Cuatro. 

Viola Vanclief, Joseph Chacón and Andreas F. could just as easily have become such avatars – but Viola and Joseph died in foster care.  Andreas was allegedly tortured and beaten into a coma by his foster mother.  They didn’t become “avatars” because the way they were hurt doesn’t support Therolf’s “master narrative.” The Love family (whose story does not appear to have been covered in the Times even once) and the mother who is the subject of this commentary in WitnessLA also would make excellent avatars of system failure. 

In fact, all of these children deserve to “stand out as avatars of how the system can go terribly awry.”  Therolf and his colleagues have chosen to emphasize some and pay far less attention to others. 

Is the racial justice movement really too influential? 

But since the spring of 2020, another name has wielded outsize influence over national perspectives and policies related to child welfare, and energized activists to push for sweeping reforms: George Floyd. 

Notice the use of “outsize” influence, as opposed to “strong influence” or just “influence.”  Outsize can simply mean large, but it also can mean “exaggerated or extravagant in size or degree” – as in: Beware! People are now way too concerned about not taking away Black children, and that is putting them in danger.  

If anything, the impact of America’s racial justice reckoning on child welfare has been undersized, consisting largely of foster care agencies slapping Black Lives Matter statements on their websites while continuing business as usual. 


Indeed, it is a testament to the double standards so many journalists apply to child welfare that, in the midst of a racial justice reckoning, so many reporters bought into a racially biased and now widely debunked myth.  That myth claimed that, due to COVID-19, with all those overwhelmingly middle-class, disproportionately white “mandated reporters” no longer keeping their “eyes” constantly on children who are neither, parents would unleash on their own children a “pandemic of child abuse.”  Garrett Therolf, who in his current job writes for outlets besides the Times, jumped right onto that bandwagon.

Similarly, while the Los Angeles Times did a superb job confronting its own racism in many other fields, it neglected to scrutinize its child welfare coverage.  Perhaps the Times’ new editor, Kevin Merida, will remedy that. 

The murder of the Black Minneapolis resident by a police officer in May 2020 set off a national soul-searching over the country’s racist past and the prejudices that still haunt its institutions. In L.A. County, that process has focused intense scrutiny on what a number of racial justice advocates and elected officials say is an implicit bias that may make some Department of Children and Family Services workers more prone to regard poor families and parents of color as unfit to raise their children. 

Watch out for those bad apples 

Notice the “few bad apples”-type framing here: some workers may be biased – as opposed to the idea that systemic racism poisons the entire system.  Therolf isn’t required to agree with that critique, but he has an ethical obligation to note its existence and explore it.  Yet as we’ve seen, Therolf has a record of dismissing the idea. 

In 2020, three-quarters of children removed from their homes in L.A. County were Latino or Black, according to a motion — authored by Supervisor Holly Mitchell and passed in July by the Board of Supervisors — to begin implementing a controversial pilot project called “blind removal.” 

The program, first adopted in Nassau County on Long Island in New York, redacts all race and race-related factors from the dossiers used by social workers and supervisors in determining child welfare cases. And it is gaining popularity, despite critics who say that it has shown insufficient evidence of its efficacy and that it adds one more task to an overtaxed workforce. 

Let’s take the last point first. What overtaxes the workforce is a flood of false reports, trivial cases, cases in which family poverty is confused with neglect, and needless removals of children by workers terrified of being the Los Angeles Times’ next target if they leave a child in a home and something goes wrong.  So the notion that one small step to try to curb racial bias should be abandoned because the workforce is overtaxed is one more indication of Therolf’s own bias. 

Therolf is partly right about criticism of the program’s efficacy – criticism from his pal Putnam-Hornstein.  (He does not mention other criticism, that the change doesn’t go far enough in uprooting institutional racism.)  But that hardly makes the whole program “controversial.”  Even the revisionist critique shows an overall decline in removals of Black children.  In fact, the data may indicate that the process contributes to a decline in removals of all children – perhaps because the process also eliminates data suggesting whether or not a family is poor. 

But the best way to find answers is to create a pilot project to test the practice.  That is exactly what the Board of Supervisors has ordered. 
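
For the curious, here is a minimal sketch of the redaction idea behind “blind removal.” The field names are hypothetical, and the real Nassau County process is a manual case-conference procedure, not software:

```python
# Hypothetical sketch of the redaction step in a "blind removal" review.
# Field names are invented; Nassau County's actual process is manual.

REDACTED_FIELDS = {
    "race", "ethnicity", "name", "address", "neighborhood",
    "receives_public_benefits",  # stripping poverty markers may be part of
}                                # why removals decline across the board

def redact(case_record: dict) -> dict:
    """Return a copy of the record with race-related and identifying fields removed."""
    return {field: value for field, value in case_record.items()
            if field not in REDACTED_FIELDS}

case = {"race": "Black", "address": "123 Main St.", "allegation": "neglect",
        "receives_public_benefits": True, "prior_hotline_calls": 2}
print(redact(case))  # {'allegation': 'neglect', 'prior_hotline_calls': 2}
```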

But while officials scramble to address these race-related concerns, other child welfare experts assert that another, relatively new methodology using machine learning and algorithms is more likely to yield race-neutral and reliable results that, among other benefits, will enable social workers to accurately identify incidents of child abuse at an early stage and move swiftly to intervene.  

Notice that this method gets a much friendlier introduction. A method widely criticized as magnifying racial bias is presented as the magic cure. 

Rather than relying on caseworkers’ limited ability to weigh a family’s full recorded case history — due to limited time and cumbersome technology — experts had urged the county to partially automate risk analysis with a new generation of predictive analytics tools to scan and evaluate hundreds of known variables regarding families, including prior hotline calls and the child’s age when the first hotline call was received. 

Stop and ask yourself: If you were writing promotional copy for a company selling predictive analytics software, would it have been any different from that?  

Better solutions 

One could, in fact, solve the limited-time problem by curbing the deluge of false reports, trivial cases and poverty cases.  This could be done by, among other things, eliminating mandatory reporting – which has been found, in studies Therolf is unlikely to tell you about, to overload the system and drive families away from seeking help.  

The deluge of false reports is due in part to CYA referrals by mandated reporters terrified of what will happen to them – including what the Times will do to them – if they don’t report and something goes wrong.  As for cumbersome technology, how about fixing it, instead of building a whole new technology on top of it? 

The use of such tools to predict which children are at greatest risk has attracted controversy because of the chance that they might exacerbate racial disparities in child welfare. That’s because Black and Latino families tend to interact more frequently with entities such as public hospitals and mandated reporters who generate the data that are used to train the algorithm on how to detect risk. 

Actually, there’s far more reason than that. The algorithm in Pittsburgh, for example, which was co-authored by Putnam-Hornstein and for which Therolf has shown particular fondness, relies heavily on databases that sweep up data only on poor people. It also relies heavily on past involvement in the family regulation system itself. (By the way, Putnam-Hornstein hates the phrase “family regulation system.”) That makes the computer-generated risk score more like a self-fulfilling prophecy than an actual prediction.  Then there’s the fact that, as noted earlier, the same approach has been proven to be biased when applied in criminal justice. 
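
To make the self-fulfilling prophecy concrete, here is a deliberately simplified sketch (the weight and threshold are invented for illustration): when past system contact is itself an input, a high score triggers an investigation, the investigation becomes a new record, and the next score is higher still.

```python
# Illustrative-only sketch of the feedback loop critics describe:
# past contact raises the score, the score triggers new contact,
# and that contact becomes tomorrow's input data.

THRESHOLD = 2.0                 # hypothetical "high risk" cutoff
WEIGHT_PER_PRIOR_CONTACT = 0.9  # hypothetical weight on past system contact

def simulate(prior_contacts: int, rounds: int) -> int:
    """Count how many investigations the loop generates on its own."""
    investigations = 0
    for _ in range(rounds):
        score = WEIGHT_PER_PRIOR_CONTACT * prior_contacts
        if score > THRESHOLD:     # flagged "high risk"
            investigations += 1
            prior_contacts += 1   # the investigation itself is recorded
    return investigations

# A family that starts with three recorded contacts is flagged every
# round, regardless of anything that actually happens in the home.
print(simulate(prior_contacts=3, rounds=5))  # 5
```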

Those pesky civil libertarians! 

Pushback from the American Civil Liberties Union and others had stalled the development of the program for years, contributing to the decision by Cagle’s predecessor, Philip Browning, to retire in frustration. 

Darn those pesky civil libertarians!  But, wait, what did the ACLU actually say?   You don’t suppose they have an in-depth critique that they presented at a recent virtual conference, do they? Isn’t this where real journalists quote the ACLU in order to explain why there might be a problem?  But no; a hallmark of Therolf’s “reporting” is to minimize dissent from his own point of view, confining it to a brief paraphrase or not mentioning it at all. 

Oh, one other thing Therolf neglects to mention: Los Angeles County tried a predictive analytics algorithm during Browning’s tenure – and it failed spectacularly. The “false positive” rate was more than 95%. Much the same happened in Illinois.  What L.A. is trying now is supposedly a new, improved algorithm.  But it’s co-authored by Putnam-Hornstein – and her Pittsburgh algorithm has much the same problem. 

But [Browning’s successor, Bobby] Cagle gained the majority support of L.A. County’s supervisors that had eluded Browning, and the tool was piloted last summer to help flag children like Noah who may be at the highest risk. (Cagle resigned in November, as DCFS faced mounting criticism over a series of fatalities and abuse of children under the agency’s care.) 

A lead designer of predictive analytics, Emily Putnam-Hornstein at the University of North Carolina, emphasizes that the tool is designed to be advisory and can be easily set aside by caseworkers if their investigation verifies that no significant safety threats exist. 

Easily set aside? You’re kidding, right?  Again, imagine what would happen to the caseworker who overrides the algorithm and something goes wrong.  Here’s what would happen: a Los Angeles Times headline saying something like: “Caseworker ignored ‘high risk’ warning about [name of child] weeks before he died.” 

Also, as Therolf describes it, the override would come only after the incredibly intrusive, traumatic investigation called for by the algorithm – an algorithm which, when it comes to predicting the kind of horror stories on which Therolf’s reporting thrives, will almost always be wrong.

Unfortunately, it seems that the chair of the Board of Supervisors, Holly Mitchell, buys this.  During a Times-sponsored video presentation, she actually said she knows there is an inherent potential bias in the algorithm but she was counting on caseworkers to both overcome their own biases and counter any bias in the algorithm. 

In a slide show presentation explaining the need for tools like L.A. County’s, Putnam-Hornstein wrote that her work stems from a “growing appreciation that current tools are inadequate, clinicians are poor at weighting factors (and time is scarce!).” An independent evaluation team will ultimately decide whether the tool helps. 

I’ve already addressed the “time is scarce” argument.  The “current tools are inadequate” argument is a Putnam-Hornstein favorite.  The problem isn’t that she’s wrong, it’s that she’s setting up a false choice: Either use the cruddy system you have now or the cruddy alternative I’m proposing. That’s something I’ve addressed before in the context of Los Angeles.  

There are other ways to go: Abolish mandatory reporting, allowing mandatory reporters to become mandatory supporters (Putnam-Hornstein hates that phrase, too), narrow neglect laws to reduce the confusion of poverty with neglect, and provide high-quality defense for families, just for starters.  All that would allow human caseworkers plenty of time to learn how to do the job better and with less bias. 

As for the claim about an independent evaluation, that needs to be checked closely, in light of the way a so-called independent ethics review was handled for Putnam-Hornstein’s Pittsburgh algorithm. 

Scapegoating family preservation 

And now, behold how, as he has done so many times before, Therolf attempts to scapegoat any effort to keep families together: 

One near-universal assumption among social workers is that children belong with their own families in the absence of serious safety threats. 

That is near-universal rhetoric – but the data show that caseworkers in Los Angeles are far more prone to tear apart families than their counterparts in other large metropolitan areas.  

But critics contend that this can lead caseworkers to reflexively declare success whenever a child remains with their parents, without taking full account of whether the child could be at risk. 

Which critics – besides you, Garrett?  That sounds a lot like Donald Trump, who was fond of saying “many people are saying” or “a lot of people are saying” when he meant himself.  Also: 

● As is so common with Therolf’s stories, this implies that holding children in foster care is safe and only returning a child home can place a child “at risk.”  That is at odds with study after study showing high rates of abuse in foster care. 

● And where are the data to support the claim about placing children at risk?  Oh, wait, there’s one line by one lawyer in one report – from 2012: 

“DCFS should change its messaging from Do Not Detain/Keep The Numbers Down,” wrote the Board of Supervisors’ special counsel in a secret internal 2012 report. 

But then, as now, the numbers weren’t down. Los Angeles was, and remains, an outlier in child removal.  Therolf provides no link to the “secret internal 2012 report” and no documentation that any such messaging existed.  If there has ever been such messaging, the data show that the rank-and-file sure didn’t get the message. 

Yet workers in the cases of Noah Cuatro, Gabriel Fernandez and others went on to finalize decisions to leave children in dangerous homes without even reading their own agency’s case file. 

Note the leap here: the logical assumption when workers make decisions without reading the file is that they didn’t have time to read the file – as Therolf himself suggests earlier in this same story.  But if you’re pushing a particular point of view, you simply leap to the conclusion that they didn’t read the file because of some fanatical devotion to keeping families together. 

Research has shown that people who are not trained to assess and investigate child abuse are more successful than DCFS workers at predicting the most serious harms to children. Researchers have found that the number of calls made by the public to child protection hotlines was a better indicator of deadly risk than the conclusions of caseworker investigations. 

Inference peddling 

Here, as throughout the story, Therolf wants us to take his word for whatever inferences he chooses to make.  There are no links to actual documents, much less to research.  No way to see the context or to fact-check his claims. 

In fact, it appears that Therolf may be making a leap from a study – by Putnam-Hornstein, of course – showing that children reported to child abuse hotlines are more likely to die than children not reported to hotlines.  

But, as noted in yesterday’s post, of all children who are, in fact, the subjects of hotline calls in the course of a year, 99.998% do not die.  That makes the horror stories no less horrible and no less a cause for action.  But they are needles in a huge haystack.  

So Putnam-Hornstein’s findings tell us only that while the risk of child abuse death among children who are subjects of hotline calls is infinitesimal, the risk to children who are not subjects of such calls is even smaller.  If you then make the number of hotline calls a huge red flag in an algorithm, you will traumatize vast numbers of children in innocent families, encourage malicious false reports, and further inundate caseworkers, leaving them even less time to find children in real danger and less time to “read their own agency’s case file” – because, remember, “time is scarce!” 
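
The arithmetic here is worth spelling out.  A minimal sketch follows: the 95% sensitivity and specificity figures are generous assumptions invented for illustration; only the base rate comes from the 99.998% figure above.

```python
# Base-rate arithmetic: why an extremely rare outcome defeats risk scoring.
# Sensitivity and specificity are illustrative assumptions; the base rate
# follows from the 99.998% figure cited above (about 2 in 100,000).

def positive_predictive_value(base_rate: float,
                              sensitivity: float,
                              specificity: float) -> float:
    """Probability that a flagged case is a true positive (Bayes' rule)."""
    true_positives = sensitivity * base_rate
    false_positives = (1 - specificity) * (1 - base_rate)
    return true_positives / (true_positives + false_positives)

ppv = positive_predictive_value(base_rate=0.00002,   # 2 per 100,000
                                sensitivity=0.95,    # assumed, generously
                                specificity=0.95)    # assumed, generously
print(f"Share of 'high risk' flags that are correct: {ppv:.4%}")
# ~0.04%: well over 99% of flags would be false positives, consistent
# with the >95% false-positive rates reported in Illinois and Los Angeles.
```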

Indeed, it is precisely because the horror stories we think of when we hear the words “child abuse” are so rare that predictive analytics experiments crashed and burned in Illinois – and Los Angeles. 

Over the last decade of relative paralysis by the Board of Supervisors to implement effective reforms, hundreds of children whose cases cried out for help have died at the hands of their caregivers. 

Actually, it wasn’t paralysis.  The constant outcry from the Board of Supervisors – and Therolf’s stories – led Los Angeles County to take far too many children and overload the system, so of course child abuse deaths did not stop. 

Can’t you hear the whistle blowing? 

Therolf goes on to list a series of factors he says explain why child abuse deaths occur disproportionately in one part of the county, the Antelope Valley.  Then comes something that sure sounds like a dog-whistle: 

Complicating matters further is that the region has one of the highest shares of Black residents, so any misstep exacerbates concerns about racial justice. 

That sounds like a more genteel version of what you’d hear on Fox News: They won’t take Black kids in danger because the politically correct woke mob will get them! 

Now, as the county prepares to select its next DCFS director, that person will have to confront a central problem that has undermined … so many …  initiatives of the past: trust. 

At a recent forum held by Fordham’s School of Law to discuss Los Angeles County’s use of machine learning, Ron Richter, the former director of New York City’s child welfare system, said that even “when we talk about a tool that may help reduce disproportionality and family regulation, sincere issues of trust surface, especially for those of us who have witnessed firsthand what child welfare looks like on the ground.” 

That’s also true, Richter added, for “those who have been historically judged by this system and feel strongly that many children and families have been misjudged.” 

But Richter, who now runs a large private foster care agency, wasn’t the only speaker at the forum.  One of the others was Aaron Horowitz, chief data scientist for the American Civil Liberties Union.  So you see, Therolf knows full well why the ACLU disagrees with Richter and does not see this as “a tool that may help reduce disproportionality and family regulation” – but he doesn’t seem to want L.A. Times readers to know it.  He allows no one to actually make the case that predictive analytics can magnify racial bias.  Though Therolf won’t tell you, you can watch Horowitz’s presentation, and the entire Fordham event, here. 

Note also the condescension.  What Richter is really saying is much like what Therolf himself suggested in that 2017 article: that we white people are using science, and those Black people are just scared because they rely on anecdote and “folkways.” 

As long as this is how the Los Angeles Times continues to cover child welfare, the commendable reckoning it undertook last September, when it looked at its own track record covering race, is incomplete.