Authors
Cecilia Panigutti, Alan Perotti, André Panisson, Paolo Bajardi, Dino Pedreschi
Publication date
2021/9/1
Journal
Information Processing & Management
Volume
58
Issue
5
Pages
102657
Publisher
Pergamon
Description
The pervasive application of algorithmic decision-making is raising concerns about the risk of unintended bias in AI systems deployed in critical settings such as healthcare. The detection and mitigation of model bias is a delicate task that should be tackled with care and with domain experts in the loop. In this paper we introduce FairLens, a methodology for discovering and explaining biases. We show how this tool can audit a fictional commercial black-box model acting as a clinical decision support system (DSS). In this scenario, healthcare facility experts can use FairLens on their historical data to discover the model's biases before incorporating it into the clinical decision flow. FairLens first stratifies the available patient data according to demographic attributes such as age, ethnicity, gender, and healthcare insurance; it then assesses the model's performance on these groups, highlighting the most …
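The stratify-then-assess step described in the abstract can be sketched as follows. This is a minimal illustration with hypothetical patient records, a hypothetical `age_group` attribute, and plain accuracy as the metric; it is not the authors' implementation of FairLens.

```python
from collections import defaultdict

# Hypothetical audit records: each holds a demographic stratum label,
# the black-box DSS prediction, and the ground-truth outcome.
records = [
    {"age_group": "18-40", "pred": 1, "true": 1},
    {"age_group": "18-40", "pred": 0, "true": 0},
    {"age_group": "41-65", "pred": 1, "true": 0},
    {"age_group": "41-65", "pred": 1, "true": 1},
    {"age_group": "65+",   "pred": 0, "true": 1},
    {"age_group": "65+",   "pred": 0, "true": 1},
]

def per_group_accuracy(records, attribute):
    """Stratify records by a demographic attribute and compute
    the model's accuracy within each stratum."""
    hits, totals = defaultdict(int), defaultdict(int)
    for r in records:
        group = r[attribute]
        totals[group] += 1
        hits[group] += int(r["pred"] == r["true"])
    return {g: hits[g] / totals[g] for g in totals}

scores = per_group_accuracy(records, "age_group")
# Rank strata from worst- to best-served to surface candidate biases.
ranked = sorted(scores.items(), key=lambda kv: kv[1])
print(ranked)  # the "65+" stratum surfaces first with accuracy 0.0
```

In an audit like the one the paper describes, the worst-performing strata would then be passed to domain experts for inspection and explanation.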
Total citations
[Citations-per-year chart, 2020–2024]
Scholar articles
C Panigutti, A Perotti, A Panisson, P Bajardi… - Information Processing & Management, 2021