Evaluating causes of algorithmic bias in juvenile criminal recidivism
Artificial Intelligence and Law, 2021, Springer
Abstract
In this paper we investigate risk prediction of criminal re-offense among juvenile defendants using general-purpose machine learning (ML) algorithms. We show that in our dataset, containing hundreds of cases, ML models achieve better predictive power than a structured professional risk assessment tool, the Structured Assessment of Violence Risk in Youth (SAVRY), at the expense of not satisfying relevant group fairness metrics that SAVRY does satisfy. We explore in more detail two possible causes of this algorithmic bias that are related to biases in the data with respect to two protected groups, foreigners and women. In particular, we look at (1) the differences in the prevalence of re-offense between protected groups and (2) the influence of the protected-group attribute, or features correlated with it, on the prediction. Our experiments show that both can lead to disparity between groups on the considered group fairness metrics. We observe that methods to mitigate the influence of either cause do not guarantee fair outcomes. An analysis of feature importance using LIME, a machine learning interpretability method, shows that some mitigation methods can shift the set of features that ML techniques rely on away from demographics and criminal history, which are highly correlated with sensitive features.
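The abstract does not include code, but the kind of group fairness comparison it describes (ML models vs. SAVRY across protected groups such as foreigners and women) can be illustrated with a short sketch. The sketch below is a hypothetical illustration, not the authors' pipeline: the arrays y_true, y_pred, and group and the synthetic data are all assumptions, and error rate balance (equal false positive and true positive rates across groups, also known as equalized odds) is used as one representative group fairness metric of the kind the paper considers.

```python
import numpy as np

def false_positive_rate(y_true, y_pred):
    """FPR = FP / (FP + TN): share of non-reoffenders flagged as high risk."""
    negatives = (y_true == 0)
    return np.mean(y_pred[negatives]) if negatives.any() else np.nan

def true_positive_rate(y_true, y_pred):
    """TPR = TP / (TP + FN): share of reoffenders flagged as high risk."""
    positives = (y_true == 1)
    return np.mean(y_pred[positives]) if positives.any() else np.nan

def error_rate_balance(y_true, y_pred, group):
    """Absolute FPR and TPR gaps between two groups (0/1 protected indicator).

    Under error rate balance a fair classifier would show gaps close to zero;
    large gaps indicate the kind of between-group disparity discussed above.
    """
    gaps = {}
    for name, rate_fn in [("FPR", false_positive_rate), ("TPR", true_positive_rate)]:
        r0 = rate_fn(y_true[group == 0], y_pred[group == 0])
        r1 = rate_fn(y_true[group == 1], y_pred[group == 1])
        gaps[name + "_gap"] = abs(r0 - r1)
    return gaps

# Hypothetical example: y_true = observed re-offense, y_pred = binarised risk
# score, group = protected attribute (e.g., 1 = foreign national).
rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=500)
y_pred = rng.integers(0, 2, size=500)
group = rng.integers(0, 2, size=500)
print(error_rate_balance(y_true, y_pred, group))
```

Similarly, the feature-importance analysis mentioned in the abstract can be sketched with LIME's tabular explainer. Again this is a minimal, assumed setup with synthetic data and illustrative feature names, not the study's actual features or model.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))
y = rng.integers(0, 2, size=500)
feature_names = ["age", "prior_offenses", "nationality", "savry_total"]  # illustrative only

clf = RandomForestClassifier(random_state=0).fit(X, y)
explainer = LimeTabularExplainer(X, feature_names=feature_names,
                                 class_names=["no_reoffense", "reoffense"],
                                 mode="classification")
# Local explanation for one defendant: which features drive this prediction?
exp = explainer.explain_instance(X[0], clf.predict_proba, num_features=4)
print(exp.as_list())  # (feature condition, weight) pairs for this single prediction
```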