Almost politically acceptable criminal justice risk assessment

Bibliographic Details
Main Author: Berk, Richard (Author)
Contributors: Elzarka, Ayya A.
Format: Electronic Article
Language: English
Published: 2020
In: Criminology & Public Policy
Year: 2020, Volume: 19, Issue: 4, Pages: 1231-1257
Description
Summary: Research Summary: In criminal justice risk forecasting, one can prove that it is impossible to optimize accuracy and fairness at the same time. One can also prove that it is usually impossible to optimize simultaneously all of the usual group definitions of fairness. In policy settings, one is necessarily left with tradeoffs about which many stakeholders will adamantly disagree. The result is a contentious stalemate. In this article, we offer a different approach. We do not seek perfectly accurate and perfectly fair risk assessments. We seek politically acceptable risk assessments. We describe and apply a machine learning approach that addresses many of the most visible claims of “racial bias” to arraignment data on 300,000 offenders. Regardless of whether such claims are true, we adjust our procedures to compensate. We train the algorithm on White offenders only and compute risk with test data separately for White offenders and Black offenders. Thus, the fitted algorithm structure is the same for both groups; the algorithm treats all offenders as if they are White. But because White and Black offenders can bring different predictor distributions to the White-trained algorithm, we provide additional adjustments as needed.

Policy Implications: Insofar as conventional machine learning procedures do not produce the accuracy and fairness that some stakeholders require, it is possible to alter conventional practice to respond explicitly to many salient stakeholder claims, even if those claims are unsupported by the facts. The result can be a politically acceptable risk assessment tool.
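To make the train-on-one-group procedure described in the summary concrete, the following is a minimal sketch, not the authors' implementation. The learner (a gradient boosting classifier), the use of scikit-learn and pandas, and all column names ("race", "rearrest", the predictor list) are assumptions introduced for illustration only.

    # Minimal sketch: fit the risk model on the reference group only,
    # then score every group with the identical fitted model.
    # Assumed inputs: a pandas DataFrame `df` with a hypothetical group
    # column "race", a binary outcome "rearrest", and predictor columns.
    import pandas as pd
    from sklearn.ensemble import GradientBoostingClassifier
    from sklearn.model_selection import train_test_split

    def fit_single_group_model(df, predictors, outcome="rearrest",
                               group_col="race", train_group="White"):
        train_df, test_df = train_test_split(
            df, test_size=0.3, random_state=0, stratify=df[group_col])

        # Train the algorithm on the reference group alone, so the
        # fitted structure is the same no matter whose risk is scored.
        ref = train_df[train_df[group_col] == train_group]
        model = GradientBoostingClassifier(random_state=0)
        model.fit(ref[predictors], ref[outcome])

        # Score each group separately with the one fitted model; any
        # difference in the score distributions now reflects only the
        # predictor distributions each group brings to the algorithm.
        scores = {group: model.predict_proba(grp[predictors])[:, 1]
                  for group, grp in test_df.groupby(group_col)}
        return model, scores

The "additional adjustments" the summary mentions for differing predictor distributions would be applied downstream to these group-specific score distributions; they are not shown in this sketch.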
ISSN:1745-9133
DOI:10.1111/1745-9133.12500