It's COMPASlicated: The Messy Relationship between RAI Datasets and Algorithmic Fairness Benchmarks

Risk assessment instrument (RAI) datasets, particularly ProPublica's COMPAS dataset, are commonly used in algorithmic fairness papers due to benchmarking practices of comparing algorithms on datasets used in prior work. In many cases, this data is used as a benchmark to demonstrate good performance without accounting for the complexities of criminal justice (CJ) processes. However, we show that pretrial RAI datasets can contain numerous measurement biases and errors, and due to disparities in discretion and deployment, algorithmic fairness applied to RAI datasets is limited in making claims about real-world outcomes. These reasons make the datasets a poor fit for benchmarking under assumptions of ground truth and real-world impact. Furthermore, conventional practices of simply replicating previous data experiments may implicitly inherit or edify normative positions without explicitly interrogating value-laden assumptions. Without context of how interdisciplinary fields have engaged in CJ research and context of how RAIs operate upstream and downstream, algorithmic fairness practices are misaligned for meaningful contribution in the context of CJ, and would benefit from transparent engagement with normative considerations and values related to fairness, justice, and equality. These factors prompt questions about whether benchmarks for intrinsically socio-technical systems like the CJ system can exist in a beneficial and ethical way.

Bibliographic Details
Main Author: Bao, Michelle (Author)
Contributors: Zottola, Samantha ; Zhou, Angela ; Venkatasubramanian, Suresh ; Lum, Kristian ; Horowitz, Aaron ; Desmarais, Sarah ; Brubach, Brian
Format: Electronic Book
Language: English
Published: 2021
In: Year: 2021
Online Access: Full text (free access)
Check availability: HBZ Gateway

MARC

LEADER 00000cam a22000002c 4500
001 1866581848
003 DE-627
005 20250113054909.0
007 cr uuu---uuuuu
008 231020s2021 xx |||||o 00| ||eng c
035 |a (DE-627)1866581848 
035 |a (DE-599)KXP1866581848 
040 |a DE-627  |b ger  |c DE-627  |e rda 
041 |a eng 
084 |a 2,1  |2 ssgn 
100 1 |a Bao, Michelle  |e VerfasserIn  |4 aut 
245 1 0 |a It's COMPASlicated: The Messy Relationship between RAI Datasets and Algorithmic Fairness Benchmarks 
264 1 |c 2021 
336 |a Text  |b txt  |2 rdacontent 
337 |a Computermedien  |b c  |2 rdamedia 
338 |a Online-Ressource  |b cr  |2 rdacarrier 
520 |a Risk assessment instrument (RAI) datasets, particularly ProPublica's COMPAS dataset, are commonly used in algorithmic fairness papers due to benchmarking practices of comparing algorithms on datasets used in prior work. In many cases, this data is used as a benchmark to demonstrate good performance without accounting for the complexities of criminal justice (CJ) processes. However, we show that pretrial RAI datasets can contain numerous measurement biases and errors, and due to disparities in discretion and deployment, algorithmic fairness applied to RAI datasets is limited in making claims about real-world outcomes. These reasons make the datasets a poor fit for benchmarking under assumptions of ground truth and real-world impact. Furthermore, conventional practices of simply replicating previous data experiments may implicitly inherit or edify normative positions without explicitly interrogating value-laden assumptions. Without context of how interdisciplinary fields have engaged in CJ research and context of how RAIs operate upstream and downstream, algorithmic fairness practices are misaligned for meaningful contribution in the context of CJ, and would benefit from transparent engagement with normative considerations and values related to fairness, justice, and equality. These factors prompt questions about whether benchmarks for intrinsically socio-technical systems like the CJ system can exist in a beneficial and ethical way. 
700 1 |a Zottola, Samantha  |e VerfasserIn  |4 aut 
700 1 |a Zhou, Angela  |e VerfasserIn  |4 aut 
700 1 |a Venkatasubramanian, Suresh  |e VerfasserIn  |4 aut 
700 1 |a Lum, Kristian  |e VerfasserIn  |4 aut 
700 1 |a Horowitz, Aaron  |e VerfasserIn  |4 aut 
700 1 |a Desmarais, Sarah  |e VerfasserIn  |4 aut 
700 1 |a Brubach, Brian  |e VerfasserIn  |4 aut 
856 4 0 |u http://arxiv.org/abs/2106.05498  |x Verlag  |z kostenfrei  |3 Volltext 
935 |a mkri 
951 |a BO 
ELC |a 1 
LOK |0 000 xxxxxcx a22 zn 4500 
LOK |0 001 4394218438 
LOK |0 003 DE-627 
LOK |0 004 1866581848 
LOK |0 005 20231020043627 
LOK |0 008 231020||||||||||||||||ger||||||| 
LOK |0 035   |a (DE-2619)CORE113890605 
LOK |0 040   |a DE-2619  |c DE-627  |d DE-2619 
LOK |0 092   |o n 
LOK |0 852   |a DE-2619 
LOK |0 852 1  |9 00 
LOK |0 935   |a core 
OAS |a 1 
ORI |a SA-MARC-krimdoka001.raw