Evaluating Fairness of Algorithmic Risk Assessment Instruments: The Problem With Forcing Dichotomies

Bibliographic Details
Main Author: Zottola, Samantha A.
Other Authors: Desmarais, Sarah L.; Lowder, Evan Marie; Duhart Clarke, Sarah E.
Format: Electronic article
Language: English
Published: 2022
In: Criminal Justice and Behavior
Year: 2022, Volume: 49, Issue: 3, Pages: 389-410
Online Access: Full text (license required)
Description
Summary: Researchers and stakeholders have developed many definitions to evaluate whether algorithmic pretrial risk assessment instruments are fair in terms of their error and accuracy. Error and accuracy are often operationalized using three sets of indicators: false-positive and false-negative percentages, false-positive and false-negative rates, and positive and negative predictive value. To calculate these indicators, a threshold must be set, and continuous risk scores must be dichotomized. We provide a data-driven examination of these three sets of indicators using data from three studies on the most widely used algorithmic pretrial risk assessment instruments: the Public Safety Assessment, the Virginia Pretrial Risk Assessment Instrument, and the Federal Pretrial Risk Assessment. Overall, our findings highlight how conclusions regarding fairness are affected by the limitations of these indicators. Future work should move toward examining whether there are biases in how the risk assessment scores are used to inform decision-making.
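The three sets of indicators the summary names can be made concrete with a short sketch. The Python below is illustrative only: it assumes a continuous risk score dichotomized at an arbitrary cutoff, and the function names, threshold, and toy data are hypothetical rather than the instruments' actual scoring rules.

```python
def dichotomize(scores, threshold):
    """Dichotomize continuous risk scores into high (1) / low (0) risk.
    The threshold is illustrative; each instrument sets its own cut point."""
    return [1 if s >= threshold else 0 for s in scores]

def error_indicators(predicted, actual):
    """Compute the three sets of error/accuracy indicators from
    dichotomized predictions and observed outcomes (1 = outcome occurred)."""
    tp = sum(1 for p, a in zip(predicted, actual) if p == 1 and a == 1)
    fp = sum(1 for p, a in zip(predicted, actual) if p == 1 and a == 0)
    fn = sum(1 for p, a in zip(predicted, actual) if p == 0 and a == 1)
    tn = sum(1 for p, a in zip(predicted, actual) if p == 0 and a == 0)
    n = tp + fp + fn + tn
    return {
        # Set 1: percentages of all cases that are false positives/negatives
        "fp_percentage": 100 * fp / n,
        "fn_percentage": 100 * fn / n,
        # Set 2: rates conditioned on the observed outcome
        "fp_rate": fp / (fp + tn),
        "fn_rate": fn / (fn + tp),
        # Set 3: predictive values conditioned on the prediction
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
    }

# Hypothetical toy data, for illustration only
scores = [0.1, 0.4, 0.6, 0.8, 0.3, 0.9]
actual = [0, 0, 1, 1, 1, 0]
print(error_indicators(dichotomize(scores, 0.5), actual))
```

Because the sets condition on different denominators (all cases, observed outcomes, or predictions), the same dichotomized scores can look fair by one set of indicators and unfair by another, which is the limitation the article examines.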
ISSN: 1552-3594
DOI: 10.1177/00938548211040544