These days, it seems like we’re hearing about artificial intelligence all over the place. We’re hearing about it in the context of manufacturing (which jobs will survive automation?), transportation (when will self-driving cars become widespread?), and healthcare (will it transform radiology?). We’re hearing about it everywhere.
The introduction of AI into our everyday lives raises policy questions about how to support the millions of workers whose jobs will be changed or eliminated. But it also raises questions about how to prevent potential discrimination in our legal system, which has already adopted certain machine learning tools. It’s a problem made even more difficult by the public’s lack of basic understanding of AI technologies, and by companies’ unwillingness to be transparent about their algorithms.
Last month, Preet was joined on Stay Tuned by Nita Farahany, a professor at Duke School of Law and an expert on the ethical implications of emerging technologies. Here’s how she described the basic building blocks of AI: “A starting place that is important for artificial intelligence is that [behind] most artificial technology is machine learning algorithms. And these are basically just kind of big software programs that take large data sets and then are able to find patterns and make inferences from those patterns that we might not easily be able to make, or that would be very difficult to do without these algorithms. But if the data isn’t very good or if it’s biased, you may end up with very biased patterns and results.”
Farahany spoke specifically about the use of AI in the courtroom, and in particular, its role in informing bail and sentencing decisions. “There are systems like COMPAS, which is a system that’s been used in the United States for making decisions to assist — to give risk scores — to judges. Or in New Jersey, there’s a system that is being used for bail determination.”
COMPAS, which stands for Correctional Offender Management Profiling for Alternative Sanctions, is a widely used piece of software that generates “risk scores” for recidivism by taking into account factors like previous arrests, age, and employment status. In 2016, ProPublica reported that the system’s risk scores were consistently biased against Black defendants, who were falsely labeled as “high-risk” to commit a future crime nearly twice as often as white defendants.
“When a full range of crimes were taken into account — including misdemeanors such as driving with an expired license — the algorithm was somewhat more accurate than a coin flip,” ProPublica concluded. “Of those deemed likely to re-offend, 61 percent were arrested for any subsequent crimes within two years. We also turned up significant racial disparities. The formula was particularly likely to falsely flag black defendants as future criminals, wrongly labeling them this way at almost twice the rate as white defendants.”
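The disparity ProPublica describes is a difference in false positive rates: the share of defendants who did not go on to re-offend but were nonetheless flagged as high-risk. Here is a minimal Python sketch of that calculation, using invented numbers rather than ProPublica’s actual data, just to make the arithmetic concrete:

```python
# Minimal sketch of the kind of comparison ProPublica made.
# The records below are invented for illustration -- NOT the real COMPAS data.
# Each record: (group, labeled_high_risk, reoffended_within_two_years)
records = [
    ("black", True,  False), ("black", True,  True),  ("black", False, False),
    ("black", True,  False),
    ("white", True,  False), ("white", False, False), ("white", False, False),
    ("white", False, True),
]

def false_positive_rate(rows):
    """Share of people who did NOT re-offend but were still flagged high-risk."""
    non_reoffenders = [r for r in rows if not r[2]]
    flagged = [r for r in non_reoffenders if r[1]]
    return len(flagged) / len(non_reoffenders)

for group in ("black", "white"):
    rows = [r for r in records if r[0] == group]
    print(group, round(false_positive_rate(rows), 2))
```

With these made-up numbers, the rate for one group comes out at roughly twice the other, which is the shape of the disparity ProPublica reported.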
But according to Farahany, the bias may not lie in the algorithm itself. She cited an in-depth study by Cynthia Rudin, a computer scientist at Duke, who found that the discrimination “was more likely [due to] things like prior arrests and sociodemographic factors, which reflect existing bias in the system.”
The COMPAS example underscored a broader point that Farahany returned to: even if new technologies like AI do not introduce biases, they can replicate our society’s existing biases, and in some cases, make them worse. And the key to conducting oversight is actually understanding how these algorithms are built. But very few people have the technical knowledge to understand these models, and companies like Equivant, which created COMPAS, have been reluctant to share their algorithms publicly, for fear of revealing trade secrets.
That has caused Rudin, the Duke computer scientist, to argue that “the focus on the question of fairness is misplaced, as these algorithms fail to meet a more important yet readily obtainable goal: transparency.”
The challenge of stemming bias in courtroom uses of AI may seem to pale in comparison to some of the more existential questions surrounding artificial intelligence. But these cases provide an early glimpse of the challenges around the potential for discrimination and the need for transparency.
“[We need to ask] the question, what is it that [AI is] replacing?” Farahany said. “Because if it’s better than the existing biased way of making bail determinations or sentencing determinations, then even if it has bias in it, it still might be better than our existing bias. And so you have to ask both questions. We can’t expect technology to be perfect. Humans aren’t perfect. The question is, is it better or is it worse?”
Outside the courtroom, one area where law enforcement leaders have argued AI can make things better is crime prevention. CAFE Insider’s very own former co-host Anne Milgram, who now serves as Administrator of the Drug Enforcement Administration (DEA), wrote in a 2017 op-ed for The Hill, “Advances in artificial intelligence and machine learning have helped us build neural networks that are specific to the unique nature of crimes, and using new techniques like downsampling results in far higher accuracy than other systems in use today. Put another way: the technology of today allows us to provide real-time, precise information to police officers at the moment they need it.”
Let us know what you think. Write to us at letters@cafe.com.