Bias mitigation for machine learning classifiers: A comprehensive survey

M Hort, Z Chen, JM Zhang, M Harman… - ACM Journal on …, 2024 - dl.acm.org
This article provides a comprehensive survey of bias mitigation methods for achieving
fairness in Machine Learning (ML) models. We collect a total of 341 publications concerning …

Towards out-of-distribution generalization: A survey

J Liu, Z Shen, Y He, X Zhang, R Xu, H Yu… - arXiv preprint arXiv …, 2021 - arxiv.org
Traditional machine learning paradigms are based on the assumption that both training and
test data follow the same statistical pattern, which is mathematically referred to as …

Toward Operationalizing Pipeline-aware ML Fairness: A Research Agenda for Developing Practical Guidelines and Tools

E Black, R Naidu, R Ghani, K Rodolfa, D Ho… - Proceedings of the 3rd …, 2023 - dl.acm.org
While algorithmic fairness is a thriving area of research, in practice, mitigating issues of bias
often gets reduced to enforcing an arbitrarily chosen fairness metric, either by enforcing …

Inherent tradeoffs in learning fair representations

H Zhao, GJ Gordon - Journal of Machine Learning Research, 2022 - jmlr.org
Real-world applications of machine learning tools in high-stakes domains are often
regulated to be fair, in the sense that the predicted target should satisfy some quantitative …

On dyadic fairness: Exploring and mitigating bias in graph connections

P Li, Y Wang, H Zhao, P Hong, H Liu - International Conference on …, 2021 - par.nsf.gov
Disparate impact has raised serious concerns in machine learning applications and its
societal impacts. In response to the need of mitigating discrimination, fairness has been …

Achieving fairness at no utility cost via data reweighing with influence

P Li, H Liu - International Conference on Machine Learning, 2022 - proceedings.mlr.press
With the fast development of algorithmic governance, fairness has become a compulsory
property for machine learning models to suppress unintentional discrimination. In this paper …

On learning fairness and accuracy on multiple subgroups

C Shui, G Xu, Q Chen, J Li, CX Ling… - Advances in …, 2022 - proceedings.neurips.cc
We propose an analysis in fair learning that preserves the utility of the data while reducing
prediction disparities under the criteria of group sufficiency. We focus on the scenario where …

Mitigating political bias in language models through reinforced calibration

R Liu, C Jia, J Wei, G Xu, L Wang… - Proceedings of the AAAI …, 2021 - ojs.aaai.org
Current large-scale language models can be politically biased as a result of the data they
are trained on, potentially causing serious problems when they are deployed in real-world …

Differentially private and fair deep learning: A Lagrangian dual approach

C Tran, F Fioretto, P Van Hentenryck - Proceedings of the AAAI …, 2021 - ojs.aaai.org
A critical concern in data-driven decision making is to build models whose outcomes do not
discriminate against some demographic groups, including gender, ethnicity, or age. To …

Fair and optimal classification via post-processing

R Xian, L Yin, H Zhao - International Conference on …, 2023 - proceedings.mlr.press
To mitigate the bias exhibited by machine learning models, fairness criteria can be
integrated into the training process to ensure fair treatment across all demographics, but it …