Evaluating Fairness of Algorithmic Risk Assessment Instruments: The Problem With Forcing Dichotomies

Bibliographic Details
Main Author: Zottola, Samantha A. (Author)
Other Authors: Desmarais, Sarah L.; Lowder, Evan Marie; Duhart Clarke, Sarah E.
Format: Electronic Article
Language: English
Published: 2022
In: Criminal justice and behavior
Year: 2022, Volume: 49, Issue: 3, Pages: 389-410
Online Access: Full text (license required)
Description
Summary: Researchers and stakeholders have developed many definitions to evaluate whether algorithmic pretrial risk assessment instruments are fair in terms of their error and accuracy. Error and accuracy are often operationalized using three sets of indicators: false-positive and false-negative percentages, false-positive and false-negative rates, and positive and negative predictive value. To calculate these indicators, a threshold must be set, and continuous risk scores must be dichotomized. We provide a data-driven examination of these three sets of indicators using data from three studies on the most widely used algorithmic pretrial risk assessment instruments: the Public Safety Assessment, the Virginia Pretrial Risk Assessment Instrument, and the Federal Pretrial Risk Assessment. Overall, our findings highlight how conclusions regarding fairness are affected by the limitations of these indicators. Future work should move toward examining whether there are biases in how the risk assessment scores are used to inform decision-making.
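
All three indicator sets named in the summary derive from the same two-by-two confusion matrix produced once continuous risk scores are dichotomized at a threshold. As a minimal sketch of that calculation (the article publishes no code; the scores, outcomes, threshold of 4, and function names below are invented for illustration):

# Illustrative reconstruction of the three indicator sets described in the
# abstract; all data here are made up, not taken from the article's studies.

def dichotomize(scores, threshold):
    """Flag scores at or above the threshold as 'high risk' (positive)."""
    return [s >= threshold for s in scores]

def error_indicators(scores, outcomes, threshold):
    """Compute the three indicator sets from the resulting confusion matrix.

    outcomes: True if the predicted event (e.g., pretrial failure) occurred.
    """
    flagged = dichotomize(scores, threshold)
    tp = sum(f and o for f, o in zip(flagged, outcomes))
    fp = sum(f and not o for f, o in zip(flagged, outcomes))
    fn = sum(not f and o for f, o in zip(flagged, outcomes))
    tn = sum(not f and not o for f, o in zip(flagged, outcomes))
    n = tp + fp + fn + tn
    return {
        # Set 1: errors as a share of the whole sample.
        "false_positive_pct": 100 * fp / n,
        "false_negative_pct": 100 * fn / n,
        # Set 2: errors conditioned on the true outcome.
        "false_positive_rate": fp / (fp + tn),  # among those who did not fail
        "false_negative_rate": fn / (fn + tp),  # among those who did fail
        # Set 3: accuracy conditioned on the instrument's prediction.
        "positive_predictive_value": tp / (tp + fp),
        "negative_predictive_value": tn / (tn + fn),
    }

# Hypothetical example: eight risk scores on a 1-6 scale, dichotomized at 4.
scores = [1, 2, 3, 4, 5, 6, 2, 5]
outcomes = [False, False, True, False, True, True, False, True]
print(error_indicators(scores, outcomes, threshold=4))

The sketch makes the summary's point concrete: the percentages divide by the whole sample, the rates condition on the true outcome, and the predictive values condition on the instrument's prediction, so the three sets can support different fairness conclusions at the same threshold.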
ISSN: 1552-3594
DOI: 10.1177/00938548211040544