Thursday, October 19, 2017

New York City bill would be a small step toward curbing computerized racial profiling in child welfare

It is the latest fad in child welfare: Use a computer algorithm that supposedly can predict who will abuse a child. The term commonly used in the field is “predictive analytics.” A more accurate term would be “computerized racial profiling.”

Child welfare is not the only field to embrace predictive analytics. It’s already in use – and proven to be racially biased – in law enforcement, for example. A few members of the New York City Council, including the chair of the council’s Technology Committee, James Vacca, see the danger. They’ve introduced legislation that would, at least, bring a small measure of transparency to how the city uses predictive analytics. Columbia Journalism Review reports that the committee’s hearing on the bill this week was among the best attended in recent history.

The bill comes just in time. The former head of the city’s child welfare agency, Gladys Carrion, expressed doubts about predictive analytics. But she “retired” as Commissioner of New York’s Administration for Children’s Services under pressure after a high-profile child fatality. Her replacement, David Hansell, gives every indication of embracing this new and dangerous fad. That is likely to worsen everything The New York Times found in its recent story about foster care as the new Jane Crow.

The dangers of computerized racial profiling


If predictive analytics worked as well as proponents say it does, Hillary Clinton would be president. Remember how the algorithms from organizations such as FiveThirtyEight and the Times kept telling us she would win?

But, as a Times analysis (published after the election) points out, there is reason for concern about predictive analytics that goes far beyond that one “yuuuge” failure. And those concerns should extend to child welfare.

ProPublica reports that predictive analytics already has gone terribly wrong in criminal justice, falsely flagging Black defendants as future criminals and underestimating risk if the defendant is white. A new analysis of ProPublica’s data confirmed its findings.

● In child welfare, a New Zealand experiment in predictive analytics, touted as a great success, wrongly predicted child abuse more than half the time.

● In Los Angeles County, another experiment was hailed as a huge success in spite of a “false positive” rate of more than 95 percent (a concrete illustration of what that error rate means appears below). And that experiment was conducted by the private, for-profit software company that wanted to sell its wares to the county.

● The same company is developing a new approach in Florida. This one targets poor people. It compares birth records to three other databases: child welfare system involvement, public assistance and “mothers who had been involved in the state’s home visiting program.”

So listen up, “at-risk” new mothers: In the world of predictive analytics, the fact that you reached out for help when you needed it and accepted assistance on how to be a better parent isn’t a sign of strength – it’s a reason to consider you suspect, and to make it more likely that your children will be taken away.
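To make concrete what a “false positive” rate like the one in Los Angeles means, here is a minimal Python sketch. The numbers are hypothetical, chosen only to show the arithmetic; they are not drawn from the New Zealand or Los Angeles studies.

# Hypothetical arithmetic only: what a 95 percent false positive rate implies.
# These figures are invented for illustration, not taken from any study.

families_flagged = 1000        # families the tool labels "high risk"
false_positive_rate = 0.95     # share of flagged families where the prediction is wrong

false_alarms = int(families_flagged * false_positive_rate)
correct_flags = families_flagged - false_alarms

print(f"Of {families_flagged} flagged families, {false_alarms} are false alarms;")
print(f"only {correct_flags} involve the harm the tool claims to predict.")

On those assumed numbers, roughly nineteen families are wrongly flagged for every one the tool gets right – and in child welfare, a wrong flag can mean a traumatic investigation or a needless removal.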
None of this has curbed the enthusiasm of predictive analytics fans. Indeed, Hansell has brought in as a consultant a key backer of the L.A. experiment: Philip Browning, who headed the Los Angeles County child welfare agency at the time.

The campaign for predictive analytics is led largely, though not exclusively, by the field’s worst extremists – those who have been most fanatical about advocating a massive increase in the number of children torn from everyone they know and love and consigned to the chaos of foster care – and also by those most deeply “in denial” when it comes to the problem of racial bias in child welfare.

Some predictive analytics boosters have even argued that “prenatal risk assessments could be used to identify children at risk of maltreatment while still in the womb.” Though these researchers argue that such targeting should be used in order to provide help to the mothers, that’s not how child welfare works in the real world.

“Yes, it’s Big Brother,” said another predictive analytics enthusiast. “But we have to have our eyes open to the potential of this model.”

The real potential of this model was aptly summed up at the hearing on the New York City bill by Yung-Mi Lee, a supervising attorney in the Criminal Defense Practice at Brooklyn Defender Services. Said Lee:

At worst, such tools provide a veneer of color- and class-blind objectivity while exacerbating the racial and economic discrimination and other inequalities in law enforcement practices and criminal and civil penalties.

If anything, the problem is worse in child welfare, where the rest of the process – the records and, in most states, even the court hearings – also is secret.

The New York bill


The New York bill doesn’t do a lot – but it would be a small step forward.

First, it would require that algorithms used “for the purposes of targeting services to persons, imposing penalties upon persons or policing” be public. That would make it possible for experts to test for bias. It also might stop private for-profit companies from pushing their products, since presumably they wouldn’t want their secret formulas made public.

Second, the bill would allow any New Yorker to “plug in” her or his own data and see the result.

So, for example, if the child welfare agency starts using predictive analytics, you could plug in your age, your race, your income level, and whether you’ve reached out for help, and see whether the agency thinks you’re at high risk of abusing your child.
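As a thought experiment, here is a minimal Python sketch of the kind of self-service lookup the bill would make possible. The factors, weights and threshold are invented purely for illustration – they are not ACS’s model, and the agency has published no such formula – but they show why putting an algorithm in public view matters: anyone could see exactly which inputs push a family over the “high risk” line.

# Purely hypothetical scoring formula -- invented for illustration only.
# It is NOT any agency's actual model; the point is what transparency reveals.

WEIGHTS = {
    "received_public_assistance": 0.30,   # in effect, a proxy for poverty
    "enrolled_in_home_visiting": 0.25,    # penalizes the act of seeking help
    "moved_homes_past_year": 0.20,        # in practice, another income proxy
    "prior_hotline_report": 0.25,
}
THRESHOLD = 0.5  # scores at or above this get labeled "high risk"

def risk_score(answers):
    """Sum the weights of every factor answered 'yes'."""
    return sum(weight for factor, weight in WEIGHTS.items() if answers.get(factor))

# A mother who accepted public assistance and a home-visiting program --
# and nothing else -- already crosses the threshold under these made-up weights.
example = {"received_public_assistance": True, "enrolled_in_home_visiting": True}
score = risk_score(example)
print(f"score = {score:.2f} -> {'high risk' if score >= THRESHOLD else 'not flagged'}")

Even in a toy version like this, the choice of factors and the weight each one carries are where bias enters – and that is precisely what secrecy hides from the families being scored.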

Unfortunately, the bill provides no redress if you find out that you are, indeed, falsely labeled “high risk” because of factors such as race or income (or factors such as whether you’ve changed homes or schools a lot, which are, in reality, measures of race and income).

And transparency alone is not enough. Los Angeles County quietly decided not to move forward with the secret software from a private company – the one with the 95 percent false positive rate. The state of California is now embarking on what it promises will be a transparent algorithm and an open process to develop it. But that alone doesn’t eliminate bias; it only creates the potential to reduce it.


NCCPR has a detailed discussion of predictive analytics in child welfare in our publication Big Data is Watching You.