Almost Politically Acceptable Criminal Justice Risk Assessment

In criminal justice risk forecasting, one can prove that it is impossible to optimize accuracy and fairness at the same time. One can also prove that it is impossible to optimize at once all of the usual group definitions of fairness. In the policy arena, one is left with tradeoffs about which many stakeholders will adamantly disagree. In this paper, we offer a different approach. We do not seek perfectly accurate and fair risk assessments. We seek politically acceptable risk assessments. We describe and apply to data on 300,000 offenders a machine learning approach that responds to many of the most visible charges of "racial bias." Regardless of whether such claims are true, we adjust our procedures to compensate. We begin by training the algorithm on White offenders only and computing risk with test data separately for White offenders and Black offenders. Thus, the fitted algorithm structure is exactly the same for both groups; the algorithm treats all offenders as if they were White. But because White and Black offenders can bring different predictor distributions to the White-trained algorithm, we provide additional adjustments if needed. Insofar as conventional machine learning procedures do not produce the accuracy and fairness that some stakeholders require, it is possible to alter conventional practice to respond explicitly to many salient stakeholder claims, even if they are unsupported by the facts. The result can be a politically acceptable risk assessment tool.

Bibliographic Details
Main Author: Berk, Richard A. (Author)
Contributors: Elzarka, Ayya A.
Format: Electronic Book
Language: English
Published: 2019
In: Year: 2019
Online Access: Full text (free of charge)
Check availability: HBZ Gateway
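
The procedure summarized in the abstract (and in field 520 of the MARC record below) can be sketched briefly: fit one risk model on White offenders only, then score White and Black offenders separately with that same fitted structure, so that any remaining differences in the score distributions come from differing predictor distributions. The following is a minimal Python sketch, assuming a pandas DataFrame with illustrative column names (race, rearrest) and gradient boosting standing in for the paper's learner; the column names and model choice are assumptions for illustration, not taken from the record.

    # Minimal sketch of the "White-trained" scoring step described in the
    # abstract. All names below (DataFrame columns, the gradient boosting
    # learner) are illustrative assumptions, not taken from the record.
    import pandas as pd
    from sklearn.ensemble import GradientBoostingClassifier
    from sklearn.model_selection import train_test_split

    def white_trained_scores(df, predictors, outcome="rearrest"):
        """Fit on White offenders only; score both groups with that one model."""
        white = df[df["race"] == "White"]
        black = df[df["race"] == "Black"]

        # Hold out White test cases so the White scores are out-of-sample too.
        w_train, w_test = train_test_split(white, test_size=0.3, random_state=0)

        model = GradientBoostingClassifier()
        model.fit(w_train[predictors], w_train[outcome])

        # The fitted structure is identical for both groups: every offender is
        # scored "as if White." Remaining differences between the two score
        # distributions reflect differing predictor distributions, which the
        # paper addresses with additional adjustments if needed.
        white_risk = model.predict_proba(w_test[predictors])[:, 1]
        black_risk = model.predict_proba(black[predictors])[:, 1]
        return white_risk, black_risk

Note that the Black group is scored entirely out-of-sample here, since none of its cases enter training; comparing the two returned score distributions is one natural starting point for the kind of group-specific adjustment the abstract mentions.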

MARC

LEADER 00000nam a22000002 4500
001 1866152653
003 DE-627
005 20231018043719.0
007 cr uuu---uuuuu
008 231018s2019 xx |||||o 00| ||eng c
035 |a (DE-627)1866152653 
035 |a (DE-599)KXP1866152653 
040 |a DE-627  |b ger  |c DE-627  |e rda 
041 |a eng 
084 |a 2,1  |2 ssgn 
100 1 |a Berk, Richard A.  |e VerfasserIn  |4 aut 
245 1 0 |a Almost Politically Acceptable Criminal Justice Risk Assessment 
264 1 |c 2019 
336 |a Text  |b txt  |2 rdacontent 
337 |a Computermedien  |b c  |2 rdamedia 
338 |a Online-Ressource  |b cr  |2 rdacarrier 
520 |a In criminal justice risk forecasting, one can prove that it is impossible to optimize accuracy and fairness at the same time. One can also prove that it is impossible to optimize at once all of the usual group definitions of fairness. In the policy arena, one is left with tradeoffs about which many stakeholders will adamantly disagree. In this paper, we offer a different approach. We do not seek perfectly accurate and fair risk assessments. We seek politically acceptable risk assessments. We describe and apply to data on 300,000 offenders a machine learning approach that responds to many of the most visible charges of "racial bias." Regardless of whether such claims are true, we adjust our procedures to compensate. We begin by training the algorithm on White offenders only and computing risk with test data separately for White offenders and Black offenders. Thus, the fitted algorithm structure is exactly the same for both groups; the algorithm treats all offenders as if they were White. But because White and Black offenders can bring different predictor distributions to the White-trained algorithm, we provide additional adjustments if needed. Insofar as conventional machine learning procedures do not produce the accuracy and fairness that some stakeholders require, it is possible to alter conventional practice to respond explicitly to many salient stakeholder claims, even if they are unsupported by the facts. The result can be a politically acceptable risk assessment tool. Comment: 29 pages, 5 figures 
700 1 |a Elzarka, Ayya A.  |e VerfasserIn  |4 aut 
856 4 0 |u http://arxiv.org/abs/1910.11410  |x Verlag  |z kostenfrei  |3 Volltext 
912 |a NOMM 
935 |a mkri 
951 |a BO 
ELC |a 1 
LOK |0 000 xxxxxcx a22 zn 4500 
LOK |0 001 4391833417 
LOK |0 003 DE-627 
LOK |0 004 1866152653 
LOK |0 005 20231018043719 
LOK |0 008 231018||||||||||||||||ger||||||| 
LOK |0 035   |a (DE-2619)CORE89582175 
LOK |0 040   |a DE-2619  |c DE-627  |d DE-2619 
LOK |0 092   |o n 
LOK |0 852   |a DE-2619 
LOK |0 852 1  |9 00 
LOK |0 935   |a core 
OAS |a 1 
ORI |a SA-MARC-krimdoka001.raw