Awareness in practice: tensions in access to sensitive attribute data for antidiscrimination

M Bogen, A Rieke, S Ahmed - Proceedings of the 2020 conference on …, 2020 - dl.acm.org
Organizations cannot address demographic disparities that they cannot see. Recent
research on machine learning and fairness has emphasized that awareness of sensitive …

What we can't measure, we can't understand: Challenges to demographic data procurement in the pursuit of fairness

MK Andrus, E Spitzer, J Brown, A Xiang - Proceedings of the 2021 ACM …, 2021 - dl.acm.org
As calls for fair and unbiased algorithmic systems increase, so too does the number of
individuals working on algorithmic fairness in industry. However, these practitioners often do …

Fairer machine learning in the real world: Mitigating discrimination without collecting sensitive data

M Veale, R Binns - Big Data & Society, 2017 - journals.sagepub.com
Decisions based on algorithmic, machine learning models can be unfair, reproducing biases
in historical data used to train them. While computational techniques are emerging to …

The Privacy-Bias Tradeoff: Data Minimization and Racial Disparity Assessments in US Government

J King, D Ho, A Gupta, V Wu… - Proceedings of the 2023 …, 2023 - dl.acm.org
An emerging concern in algorithmic fairness is the tension with privacy interests. Data
minimization can restrict access to protected attributes, such as race and ethnicity, for bias …

Can querying for bias leak protected attributes? Achieving privacy with smooth sensitivity

F Hamman, J Chen, S Dutta - Proceedings of the 2023 ACM Conference …, 2023 - dl.acm.org
Existing regulations often prohibit model developers from accessing protected attributes
(gender, race, etc.) during training. This leads to scenarios where fairness assessments …

How redundant are redundant encodings? Blindness in the wild and racial disparity when race is unobserved

L Cheng, IO Gallegos, D Ouyang, J Goldin… - Proceedings of the 2023 …, 2023 - dl.acm.org
We address two emerging concerns in algorithmic fairness: (i) redundant encodings of race –
the notion that machine learning models encode race with probability nearing one as the …

Themis-ml: A fairness-aware machine learning interface for end-to-end discrimination discovery and mitigation

N Bantilan - Journal of Technology in Human Services, 2018 - Taylor & Francis
As more industries integrate machine learning into socially sensitive decision processes like
hiring, loan-approval, and parole-granting, we are at risk of perpetuating historical and …

Data augmentation for discrimination prevention and bias disambiguation

S Sharma, Y Zhang, JM Ríos Aliaga… - Proceedings of the …, 2020 - dl.acm.org
Machine learning models are prone to biased decisions due to biases in the datasets they
are trained on. In this paper, we introduce a novel data augmentation technique to create a …

Fairness under unawareness: Assessing disparity when protected class is unobserved

J Chen, N Kallus, X Mao, G Svacha… - Proceedings of the …, 2019 - dl.acm.org
Assessing the fairness of a decision making system with respect to a protected class, such
as gender or race, is challenging when class membership labels are unavailable …

Toward accountable discrimination-aware data mining: the importance of keeping the human in the loop—and under the looking glass

B Berendt, S Preibusch - Big data, 2017 - liebertpub.com
“Big Data” and data-mined inferences are affecting more and more of our lives, and
concerns about their possible discriminatory effects are growing. Methods for discrimination …