Can you make AI fairer than a judge? Play our courtroom algorithm game


The US criminal legal system uses predictive algorithms to try to make the judicial process less biased. But there’s a deeper problem.

As a child, you develop a sense of what “fairness” means. It’s a concept that you learn early on as you come to terms with the world around you. Something either feels fair or it doesn’t.

But increasingly, algorithms have begun to arbitrate fairness for us. They decide who sees housing ads, who gets hired or fired, and even who gets sent to jail. Consequently, the people who create them—software engineers—are being asked to articulate what it means to be fair in their code. This is why regulators around the world are now grappling with a question: How can you mathematically quantify fairness?

This story attempts to offer an answer. And to do so, we need your help. We’re going to walk through a real algorithm, one used to decide who gets sent to jail, and ask you to tweak its various parameters to make its outcomes more fair. (Don’t worry—this won’t involve looking at code!)
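Before the walkthrough, it helps to see what "mathematically quantifying fairness" can mean in practice. The sketch below uses made-up data (all names and numbers are hypothetical, not from any real risk tool) to compute two common but competing fairness criteria for a binary detain/release recommendation: equal detention rates across groups, and equal false-positive rates across groups.

```python
# Toy illustration with hypothetical data: two ways to quantify "fairness"
# for an algorithm that flags defendants as high risk.

def detention_rate(flags):
    """Fraction of a group flagged high risk (recommended for detention)."""
    return sum(flags) / len(flags)

def false_positive_rate(flags, reoffended):
    """Among defendants who did NOT reoffend, the fraction wrongly flagged."""
    flags_for_innocent = [f for f, r in zip(flags, reoffended) if not r]
    return sum(flags_for_innocent) / len(flags_for_innocent)

# Hypothetical outcomes for two demographic groups:
# flag = 1 means the algorithm recommended detention;
# reoffended = 1 means the defendant was later rearrested.
group_a_flags      = [1, 1, 0, 1, 0, 0, 1, 0]
group_a_reoffended = [1, 0, 0, 1, 0, 0, 1, 0]
group_b_flags      = [1, 0, 0, 0, 1, 0, 0, 0]
group_b_reoffended = [1, 0, 0, 0, 0, 0, 1, 0]

# Criterion 1 ("demographic parity"): are both groups detained at the same rate?
parity_gap = detention_rate(group_a_flags) - detention_rate(group_b_flags)

# Criterion 2 ("equal false positive rates"): are defendants who would not
# have reoffended wrongly detained equally often in each group?
fpr_gap = (false_positive_rate(group_a_flags, group_a_reoffended)
           - false_positive_rate(group_b_flags, group_b_reoffended))

print(f"detention-rate gap:      {parity_gap:.2f}")
print(f"false-positive-rate gap: {fpr_gap:.2f}")
```

A key result in this area is that, except in degenerate cases, no algorithm can satisfy all such criteria at once when groups have different base rates — which is one reason tweaking the parameters, as the story invites you to do, involves genuine trade-offs rather than a single right answer.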


How to Upgrade Judges with Machine Learning

When should a criminal defendant be required to await trial in jail rather than at home? Software could significantly improve judges’ ability to make that call—reducing crime or the number of people stuck waiting in jail.

In a new study from the National Bureau of Economic Research, economists and computer scientists trained an algorithm to predict whether defendants were a flight risk, using rap sheets and court records from hundreds of thousands of cases in New York City. When tested on more than a hundred thousand additional cases it had never seen, the algorithm proved better than judges at predicting what defendants would do after release.
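The study's exact model isn't described here, but the general approach can be sketched. The minimal example below (synthetic data; the features, weights, and threshold are assumptions for illustration, not the NBER study's model) trains a logistic-regression score for flight risk by plain gradient descent on two record-based features: prior arrests and prior failures to appear.

```python
# Minimal sketch of a flight-risk predictor, trained on SYNTHETIC data.
# This is an illustrative stand-in, not the model from the NBER study.
import math
import random

random.seed(0)

def make_case():
    """Hypothetical defendant record: prior arrests and failures to appear."""
    priors = random.randint(0, 5)
    ftas = random.randint(0, 3)
    # Synthetic ground truth: more priors/FTAs -> more likely to skip court.
    p_skip = 1 / (1 + math.exp(-(0.6 * priors + 1.0 * ftas - 3.0)))
    skipped = 1 if random.random() < p_skip else 0
    return (priors, ftas), skipped

train = [make_case() for _ in range(2000)]

# Logistic regression fit by stochastic gradient descent.
w = [0.0, 0.0]
b = 0.0
lr = 0.05
for _ in range(200):
    for (priors, ftas), y in train:
        z = w[0] * priors + w[1] * ftas + b
        p = 1 / (1 + math.exp(-z))   # predicted flight probability
        err = p - y                  # gradient of log loss w.r.t. z
        w[0] -= lr * err * priors
        w[1] -= lr * err * ftas
        b -= lr * err

def flight_risk(priors, ftas):
    """Predicted probability that a defendant fails to appear."""
    return 1 / (1 + math.exp(-(w[0] * priors + w[1] * ftas + b)))

# A release policy might detain only defendants above some risk threshold;
# where that threshold sits is a policy choice, not a modeling one.
print(f"risk(0 priors, 0 FTAs) = {flight_risk(0, 0):.2f}")
print(f"risk(5 priors, 3 FTAs) = {flight_risk(5, 3):.2f}")
```

The point of such a model is not that it replaces the judge, but that its threshold can be tuned explicitly — toward fewer crimes on release, or toward fewer people held in jail — whereas a judge's implicit threshold cannot.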
