These days, it seems like we're hearing about artificial intelligence all over the place. We're hearing about it in the context of manufacturing (which jobs will survive automation?), transportation (when will self-driving cars become widespread?), and healthcare (will it transform radiology?). We're hearing about it everywhere.
The introduction of AI into our everyday lives raises policy questions about how to support the millions of workers whose jobs will be changed or eliminated. But it also raises questions about how to prevent potential discrimination in our legal system, which has already adopted certain machine learning tools. It's a problem made even more difficult by the public's lack of basic understanding of AI technologies, and by companies' unwillingness to be transparent about their algorithms.
Last month, Preet was joined on Stay Tuned by Nita Farahany, a professor at Duke School of Law and an expert on the ethical implications of emerging technologies. Here's how she described the basic building blocks of AI: "A starting place that is important for artificial intelligence is that [behind] most artificial technology is machine learning algorithms. And these are basically just kind of big software programs that take large data sets and then are able to find patterns and make inferences from those patterns that we might not easily be able to make, or that would be very difficult to do without these algorithms. But if the data isn't very good or if it's biased, you may end up with very biased patterns and results."
Farahany spoke specifically about the use of AI in the courtroom, and in particular, its role in informing bail and sentencing decisions. "There are systems like COMPAS, which is a system that's been used in the United States for making decisions to assist – to give risk scores – to judges. Or in New Jersey, there's a system that is being used for bail determination."
COMPAS, which stands for Correctional Offender Management Profiling for Alternative Sanctions, is a popular AI software tool that produces "risk scores" for recidivism by taking into account factors like previous arrests, age, and employment status. In 2016, ProPublica reported that the system's risk scores were consistently biased against Black defendants, who were nearly twice as likely as white defendants to be falsely labeled "high risk" for committing a future crime.
"When a full range of crimes were taken into account – including misdemeanors such as driving with an expired license – the algorithm was somewhat more accurate than a coin flip," ProPublica concluded. "Of those deemed likely to re-offend, 61 percent were arrested for any subsequent crimes within two years. We also turned up significant racial disparities. The formula was particularly likely to falsely flag black defendants as future criminals, wrongly labeling them this way at almost twice the rate as white defendants."
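To make concrete what that statistic measures, here is a minimal, purely illustrative sketch in Python. The records and group names below are hypothetical, and this is not ProPublica's code or data; it simply shows how a false positive rate (the share of people who did not go on to reoffend but were still flagged as high-risk) can be compared across two groups of defendants.

```python
# Hypothetical sketch (not ProPublica's analysis or data): comparing the
# false positive rate of a risk-scoring tool across two groups.
# Each record is (group, labeled_high_risk, reoffended_within_two_years).
records = [
    ("Group A", True,  False),
    ("Group A", True,  True),
    ("Group A", False, False),
    ("Group A", True,  False),
    ("Group B", True,  False),
    ("Group B", False, False),
    ("Group B", False, True),
    ("Group B", False, False),
]

def false_positive_rate(group: str) -> float:
    """Among people in `group` who did NOT reoffend, the share flagged high-risk."""
    did_not_reoffend = [r for r in records if r[0] == group and not r[2]]
    wrongly_flagged = [r for r in did_not_reoffend if r[1]]
    return len(wrongly_flagged) / len(did_not_reoffend)

for g in ("Group A", "Group B"):
    print(f"{g}: false positive rate = {false_positive_rate(g):.2f}")
```

In this toy example, Group A's non-reoffenders are wrongly flagged at twice the rate of Group B's, which is the kind of disparity ProPublica reported between Black and white defendants.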
But according to Farahany, it's not just that the algorithm itself was biased. She cited an in-depth study by Cynthia Rudin, a computer scientist at Duke, who found that the discrimination "was more likely [due to] things like prior arrests and sociodemographic factors, which reflect existing bias in the system."
The COMPAS example underscored a broader point that Farahany returned to: even if new technologies like AI do not introduce biases, they can replicate our society's existing biases, and in some cases, make them worse. And the key to conducting oversight is actually understanding how these algorithms are built. But very few people have the technical knowledge to understand these models, and companies like Equivant, which created COMPAS, have been reluctant to share their algorithms publicly, for fear of revealing trade secrets.
That has caused Rudin, the Duke computer scientist, to argue that "the focus on the question of fairness is misplaced, as these algorithms fail to meet a more important yet readily obtainable goal: transparency."
The challenge of stemming bias in courtroom uses of AI may seem to pale in comparison to some of the more existential questions surrounding artificial intelligence. But these cases offer an early glimpse of the challenges posed by the potential for discrimination and the need for transparency.
"[We need to ask] the question, what is it that [AI is] replacing?" Farahany said. "Because if it's better than the existing biased way of making bail determinations or sentencing determinations, then even if it has bias in it, it still might be better than our existing bias. And so you have to ask both questions. We can't expect technology to be perfect. Humans aren't perfect. The question is, is it better or is it worse?"
Outside of the courtroom, one way law enforcement leaders have argued AI can make things better is in the context of crime prevention. CAFE Insider's very own former co-host Anne Milgram, who now serves as head of the Drug Enforcement Administration (DEA), wrote in a 2017 op-ed for The Hill: "Advances in artificial intelligence and machine learning have helped us build neural networks that are specific to the unique nature of crimes, and using new techniques like downsampling results in far higher accuracy than other systems in use today. Put another way: the technology of today allows us to provide real-time, precise information to police officers at the moment they need it."
Let us know what you think. Write to us at letters@cafe.com.