In Pursuit of Interpretable, Fair and Accurate Machine Learning for Criminal Recidivism Prediction


Bibliographic Details
Main Author: Han, Bin (Author)
Contributors: Wang, Caroline S. ; Rudin, Cynthia ; Patel, Bhrij
Format: Electronic Book
Language: English
Published: 2021
In: Year: 2021
Online Access: Full text (free of charge)
MARC

LEADER 00000cam a22000002c 4500
001 1866908138
003 DE-627
005 20250115004301.0
007 cr uuu---uuuuu
008 231021s2021 xx |||||o 00| ||eng c
035 |a (DE-627)1866908138 
035 |a (DE-599)KXP1866908138 
040 |a DE-627  |b ger  |c DE-627  |e rda 
041 |a eng 
084 |a 2,1  |2 ssgn 
100 1 |a Han, Bin  |e VerfasserIn  |4 aut 
245 1 0 |a In Pursuit of Interpretable, Fair and Accurate Machine Learning for Criminal Recidivism Prediction 
264 1 |c 2021 
336 |a Text  |b txt  |2 rdacontent 
337 |a Computermedien  |b c  |2 rdamedia 
338 |a Online-Ressource  |b cr  |2 rdacarrier 
520 |a Objectives: We study interpretable recidivism prediction using machine learning (ML) models and analyze performance in terms of prediction ability, sparsity, and fairness. Unlike previous works, this study trains interpretable models that output probabilities rather than binary predictions, and uses quantitative fairness definitions to assess the models. This study also examines whether models can generalize across geographic locations. Methods: We generated black-box and interpretable ML models on two different criminal recidivism datasets from Florida and Kentucky. We compared predictive performance and fairness of these models against two methods that are currently used in the justice system to predict pretrial recidivism: the Arnold PSA and COMPAS. We evaluated predictive performance of all models on predicting six different types of crime over two time spans. Results: Several interpretable ML models can predict recidivism as well as black-box ML models and are more accurate than COMPAS or the Arnold PSA. These models are potentially useful in practice. Similar to the Arnold PSA, some of these interpretable models can be written down as a simple table. Others can be displayed using a set of visualizations. Our geographic analysis indicates that ML models should be trained separately for separate locations and updated over time. We also present a fairness analysis for the interpretable models. Conclusions: Interpretable machine learning models can perform just as well as non-interpretable methods and currently used risk assessment scales, in terms of both prediction accuracy and fairness. Machine learning models might be more accurate when trained separately for distinct locations and kept up-to-date. 
650 4 |a slides 
700 1 |8 1\p  |a Wang, Caroline S.  |e VerfasserIn  |0 (DE-588)1049673891  |0 (DE-627)78244119X  |0 (DE-576)403653606  |4 aut 
700 1 |8 2\p  |a Rudin, Cynthia  |e VerfasserIn  |0 (DE-588)1050432932  |0 (DE-627)783980159  |0 (DE-576)404633412  |4 aut 
700 1 |a Patel, Bhrij  |e VerfasserIn  |4 aut 
856 4 0 |u http://arxiv.org/abs/2005.04176  |x Verlag  |z kostenfrei  |3 Volltext 
883 |8 1  |a cgwrk  |d 20241001  |q DE-101  |u https://d-nb.info/provenance/plan#cgwrk 
883 |8 2  |a cgwrk  |d 20241001  |q DE-101  |u https://d-nb.info/provenance/plan#cgwrk 
935 |a mkri 
951 |a BO 
ELC |a 1 
LOK |0 000 xxxxxcx a22 zn 4500 
LOK |0 001 4394718198 
LOK |0 003 DE-627 
LOK |0 004 1866908138 
LOK |0 005 20231021043619 
LOK |0 008 231021||||||||||||||||ger||||||| 
LOK |0 035   |a (DE-2619)CORE85604267 
LOK |0 040   |a DE-2619  |c DE-627  |d DE-2619 
LOK |0 092   |o n 
LOK |0 852   |a DE-2619 
LOK |0 852 1  |9 00 
LOK |0 935   |a core 
OAS |a 1 
ORI |a SA-MARC-krimdoka001.raw
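The abstract above describes training sparse, interpretable models that output recidivism probabilities and assessing them with quantitative fairness definitions. A minimal, purely illustrative sketch of that workflow follows; it uses synthetic data rather than the Florida or Kentucky datasets, an L1-regularized logistic regression as one possible sparse probability model, and an assumed statistical-parity-style gap rather than the paper's exact fairness definitions.

```python
# Illustrative sketch only: a sparse, interpretable probability model
# (L1-regularized logistic regression) fit on synthetic data, plus one
# simple group-fairness check. Features and metric are assumptions for
# illustration, not the models or data from the record above.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000
# Hypothetical features: prior arrests, age at assessment, binary group flag.
priors = rng.poisson(2, n)
age = rng.integers(18, 70, n)
group = rng.integers(0, 2, n)
X = np.column_stack([priors, age])
# Synthetic outcome: risk rises with priors and falls with age.
p = 1 / (1 + np.exp(-(0.4 * priors - 0.05 * (age - 40))))
y = rng.binomial(1, p)

# The L1 penalty encourages sparse coefficients, keeping the fitted
# model small enough to write down as a simple scoring table.
model = LogisticRegression(penalty="l1", solver="liblinear", C=0.5).fit(X, y)
probs = model.predict_proba(X)[:, 1]  # probabilities, not binary labels

# One quantitative fairness check: difference in mean predicted risk by
# group (a statistical-parity-style gap; other definitions substitute in).
gap = abs(probs[group == 0].mean() - probs[group == 1].mean())
print("coefficients:", model.coef_.round(2))
print(f"mean-risk gap between groups: {gap:.3f}")
```

Because the model is just a pair of coefficients on two features, its predictions can be tabulated by priors and age bands, which is the sense in which the abstract calls such models displayable as a simple table.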