We can’t end mass incarceration without first changing what happens before trial. On any given day, the United States incarcerates nearly half a million people who have only been accused of a crime and await their day in court.
In response, many cities and counties have started to use algorithms that try to predict people’s future criminal behavior, known as risk assessments. As researchers in the fields of sociology, data science and law, we believe pretrial risk assessment tools are fundamentally flawed. They give judges recommendations that make future violence seem more predictable and more certain than it actually is. In the process, risk assessments may perpetuate the misconceptions and fears that drive mass incarceration.
More than 30 years ago, the Supreme Court affirmed in United States v. Salerno that “liberty is the norm, and detention prior to trial or without trial is the carefully limited exception.” In practice, the opposite is true. The United States accounts for only 4 percent of the global population, but 20 percent of the global pretrial jail population. Current pretrial incarceration rates defy all historical norms. There are more legally innocent people behind bars in America today than there were convicted people in jails and prisons in 1980.
America’s extreme pretrial incarceration rates are driven by an inflated sense of the risk of pretrial violence and a single response to that risk: jail. Our work has revealed that one question, above all others, motivates judges’ decisions to release or jail someone before trial: Will this person commit a violent crime?
One judge explained his thinking to us. “You don’t want to be the judge that releases someone,” he said, who “goes out and does something horrible.” This fear has led judges to systematically overestimate pretrial violence. Violent crime is quite rare. Even in cities with high crime rates and high rates of pretrial release, it’s uncommon for someone to commit violence while awaiting trial.
Take Washington, D.C., for instance. The District releases 94 percent of people accused of a crime. Only 2 percent of those people are arrested for a violent crime while on release. But even though rates of pretrial violence are in the single digits across the country, it’s common for states to lock up 30 percent or more of the people awaiting trial.
To fix this, jurisdictions across the country have embraced algorithmic risk assessments. The hope is that these tools can harness big data to help judges make more informed, accurate decisions, thereby reducing jail populations while maintaining public safety.
By crunching large volumes of criminal history data, risk assessment algorithms try to calculate a person’s risk of future violence based on patterns of how often people with similar characteristics were arrested for a violent crime in the past. Different algorithms draw on different personal characteristics, like prior convictions, length of current employment or even ZIP code. Some even consider whether a person owns or rents a home or has a cellphone.
Algorithmic risk assessments are touted as being more objective and accurate than judges in predicting future violence. Across the political spectrum, these tools have become the darling of bail reform. But their success rests on the hope that risk assessments can be a valuable course corrector for judges’ faulty human intuition.
When it comes to predicting violence, risk assessments offer more magical thinking than helpful forecasting. We and other researchers have written a statement about the fundamental technical flaws with these tools.
Risk assessments are virtually useless for identifying who will commit violence if released pretrial. Consider the pre-eminent risk assessment tool on the market today, the Public Safety Assessment, or P.S.A., adopted in New Jersey, Kentucky and various counties across the country. In these jurisdictions, the P.S.A. assesses every person accused of a crime and flags them as either at risk for “new violent criminal activity” or not. A judge sees whether the person has been flagged for violence and, depending on the jurisdiction, may receive an automatic recommendation to release or detain.
Risk assessments’ simple labels obscure the deep uncertainty of their actual predictions. Largely because pretrial violence is so rare, it is virtually impossible for any statistical model to identify people who are more likely than not to commit a violent crime.
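To see why a rare outcome defeats prediction, consider a back-of-the-envelope calculation. The numbers below are hypothetical, chosen only to illustrate the base-rate problem: even a tool that catches half of all future violent cases while flagging only one in ten of everyone else would still be wrong about the vast majority of the people it flags.

```python
# Hypothetical illustration of the base-rate problem in pretrial risk prediction.
# None of these figures describe any real tool; they are assumptions for the sketch.
base_rate = 0.02        # share of released people later arrested for violence (rare)
sensitivity = 0.50      # share of future violent cases the tool correctly flags
false_pos_rate = 0.10   # share of non-violent people the tool wrongly flags

# Among everyone the tool flags, how many actually go on to a violent arrest?
flagged_violent = sensitivity * base_rate
flagged_nonviolent = false_pos_rate * (1 - base_rate)
ppv = flagged_violent / (flagged_violent + flagged_nonviolent)

print(f"Flagged people actually arrested for violence: {ppv:.0%}")      # about 9%
print(f"Flagged people NOT arrested for violence: {1 - ppv:.0%}")       # about 91%
```

Because the base rate is so low, the false positives among the many non-violent people swamp the true positives among the few violent ones, so roughly nine in ten flagged people would never have committed violence at all.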
Of the people the P.S.A. flags for pretrial violence, 92 percent will not be arrested for a violent crime. The fact is, a vast majority of even the highest-risk individuals will not commit a violent crime while awaiting trial. If these tools were calibrated to be as accurate as possible, then they would simply predict that every person is unlikely to commit a violent crime while on pretrial release.
Instead, the P.S.A. sacrifices accuracy for the sake of making questionable distinctions among people who all have a low, indeterminate or incalculable likelihood of violence. Algorithmic risk assessments label people as at risk for violence without providing judges any sense of the underlying likelihood or uncertainty of this prediction. As a result, these tools could easily lead judges to overestimate the risk of pretrial violence and detain far more people than is justified.
These limits may offer a broader lesson for the project of reducing mass incarceration. Applying “big data” forecasting to our existing criminal justice practices is not just inadequate — it also risks cementing the irrational fears and flawed logic of mass incarceration behind a veneer of scientific objectivity. Neither judges nor software can know in advance who will and who won’t commit violent crime. Risk assessments are a case study of how a real-world “Minority Report” doesn’t work.
This doesn’t mean the task of violence prevention is pointless. We must look beyond preventive incarceration and adopt more holistic ways to improve public safety. Policy solutions cannot be limited to locking up the “right” people; they must address public safety through broader social policies and community investment.
Chelsea Barabas and Karthik Dinakar are research scientists at M.I.T. Colin Doyle is a lawyer at the Criminal Justice Policy Program at Harvard Law School.