Assessing the fairness of AI systems: AI practitioners' processes, challenges, and needs for support
Various tools and practices have been developed to support practitioners in identifying,
assessing, and mitigating fairness-related harms caused by AI systems. However, prior …
A systematic study of bias amplification
Recent research suggests that predictions made by machine-learning models can amplify
biases present in the training data. When a model amplifies bias, it makes certain …
De-biasing “bias” measurement
When a model's performance differs across socially or culturally relevant groups, such as race,
gender, or the intersections of many such groups, it is often called "biased." While much of …
Vision-language models performing zero-shot tasks exhibit gender-based disparities
We explore the extent to which zero-shot vision-language models exhibit gender bias for
different vision tasks. Vision models traditionally required task-specific labels for …
A comparison of approaches to improve worst-case predictive model performance over patient subpopulations
Predictive models for clinical outcomes that are accurate on average in a patient population
may underperform drastically for some subpopulations, potentially introducing or reinforcing …
Net benefit, calibration, threshold selection, and training objectives for algorithmic fairness in healthcare
A growing body of work uses the paradigm of algorithmic fairness to frame the development
of techniques to anticipate and proactively mitigate the introduction or exacerbation of health …
Gaps in the Safety Evaluation of Generative AI
Generative AI systems produce a range of ethical and social risks. Evaluation of these risks
is a critical step on the path to ensuring the safety of these systems. However, evaluation …
Disentangling and operationalizing AI fairness at LinkedIn
Operationalizing AI fairness at LinkedIn's scale is challenging not only because there are
multiple mutually incompatible definitions of fairness but also because determining what is …
Evaluating algorithmic fairness in the presence of clinical guidelines: the case of atherosclerotic cardiovascular disease risk estimation
Objectives The American College of Cardiology and the American Heart Association
guidelines on primary prevention of atherosclerotic cardiovascular disease (ASCVD) …
Towards responsible natural language annotation for the varieties of Arabic
AS Bergman, MT Diab - arXiv preprint arXiv:2203.09597, 2022 - arxiv.org
When building NLP models, there is a tendency to aim for broader coverage, often
overlooking cultural and (socio)linguistic nuance. In this position paper, we make the case …