RT Article
T1 Almost politically acceptable criminal justice risk assessment
JF Criminology & Public Policy
VO 19
IS 4
SP 1231
OP 1257
A1 Berk, Richard
A2 Elzarka, Ayya A.
LA English
YR 2020
UL https://krimdok.uni-tuebingen.de/Record/1742770754
AB Research: In criminal justice risk forecasting, one can prove that it is impossible to optimize accuracy and fairness at the same time. One can also prove that it is usually impossible to optimize simultaneously all of the usual group definitions of fairness. In policy settings, one is necessarily left with tradeoffs about which many stakeholders will adamantly disagree. The result is a contentious stalemate. In this article, we offer a different approach. We do not seek perfectly accurate and perfectly fair risk assessments; we seek politically acceptable risk assessments. We describe and apply a machine learning approach that addresses many of the most visible claims of “racial bias” to arraignment data on 300,000 offenders. Regardless of whether such claims are true, we adjust our procedures to compensate. We train the algorithm on White offenders only and compute risk with test data separately for White offenders and Black offenders. Thus, the fitted algorithm structure is the same for both groups; the algorithm treats all offenders as if they are White. But because White and Black offenders can bring different predictor distributions to the White-trained algorithm, we provide additional adjustments as needed. Policy Implications: Insofar as conventional machine learning procedures do not produce the accuracy and fairness that some stakeholders require, it is possible to alter conventional practice to respond explicitly to many salient stakeholder claims even if those claims are unsupported by the facts. The result can be a politically acceptable risk assessment tool.
K1 Fairness
K1 Forecasting
K1 Machine learning
K1 Racial bias
K1 Risk assessment
DO 10.1111/1745-9133.12500
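
The abstract's core procedure, train on one group only and then score each group's test data with the same fitted model, can be illustrated with a minimal sketch. This is an assumption-laden illustration, not the paper's actual pipeline: the file name "arraignments.csv", the column names ("race", "rearrest"), and the choice of scikit-learn's GradientBoostingClassifier are all hypothetical stand-ins; the paper's own algorithm and its further distributional adjustments are not reproduced here.

```python
# Minimal sketch of the train-on-White, test-separately idea from the
# abstract. Data file, column names, and the classifier are hypothetical
# placeholders, not the paper's actual data or method.
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

df = pd.read_csv("arraignments.csv")  # hypothetical arraignment data
features = [c for c in df.columns if c not in ("race", "rearrest")]

# Fit on White offenders only, so the fitted structure is identical for
# both groups: the algorithm "treats all offenders as if they are White".
white = df[df["race"] == "White"]
train, test_white = train_test_split(white, test_size=0.3, random_state=0)
clf = GradientBoostingClassifier().fit(train[features], train["rearrest"])

# Score test data separately for each group. Because the two groups can
# bring different predictor distributions to the White-trained model,
# the paper applies additional adjustments downstream (not shown here).
test_black = df[df["race"] == "Black"]
risk_white = clf.predict_proba(test_white[features])[:, 1]
risk_black = clf.predict_proba(test_black[features])[:, 1]
```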