Dynamic fairness - Breaking vicious cycles in automatic decision making

B Paaßen, A Bunge, C Hainke, L Sindelar… - arXiv preprint arXiv …, 2019 - arxiv.org
In recent years, machine learning techniques have been increasingly applied in sensitive
decision making processes, raising fairness concerns. Past research has shown that …

Blind Pareto fairness and subgroup robustness

NL Martinez, MA Bertran, A Papadaki… - International …, 2021 - proceedings.mlr.press
Much of the work in the field of group fairness addresses disparities between predefined
groups based on protected features such as gender, age, and race, which need to be …

OmniFair: A declarative system for model-agnostic group fairness in machine learning

H Zhang, X Chu, A Asudeh, SB Navathe - Proceedings of the 2021 …, 2021 - dl.acm.org
Machine learning (ML) is increasingly being used to make decisions in our society. ML
models, however, can be unfair to certain demographic groups (e.g., African Americans or …

Fair classification via unconstrained optimization

I Alabdulmohsin - arXiv preprint arXiv:2005.14621, 2020 - arxiv.org
Achieving the Bayes optimal binary classification rule subject to group fairness constraints is
known to be reducible, in some cases, to learning a group-wise thresholding rule over the …

Robust optimization for fairness with noisy protected groups

S Wang, W Guo, H Narasimhan… - Advances in neural …, 2020 - proceedings.neurips.cc
Many existing fairness criteria for machine learning involve equalizing some metric across
protected groups such as race or gender. However, practitioners trying to audit or enforce …

Ensuring fairness beyond the training data

D Mandal, S Deng, S Jana, J Wing… - Advances in neural …, 2020 - proceedings.neurips.cc
We initiate the study of fair classifiers that are robust to perturbations in the training
distribution. Despite recent progress, the literature on fairness has largely ignored the …

Utility-Fairness Trade-Offs and How to Find Them

S Dehdashtian, B Sadeghi… - Proceedings of the …, 2024 - openaccess.thecvf.com
When building classification systems with demographic fairness considerations there are
two objectives to satisfy: 1) maximizing utility for the specific task and 2) ensuring fairness w.r.t. …

Increasing Fairness via Combination with Learning Guarantees

Y Bian, K Zhang, A Qiu, N Chen - arXiv preprint arXiv:2301.10813, 2023 - arxiv.org
The concern about underlying discrimination hidden in machine learning (ML) models is
increasing, as ML systems have been widely applied in more and more real-world scenarios …

Parametric Fairness with Statistical Guarantees

F Hu, P Ratz, A Charpentier - arXiv preprint arXiv:2310.20508, 2023 - arxiv.org
Algorithmic fairness has gained prominence due to societal and regulatory concerns about
biases in Machine Learning models. Common group fairness metrics like Equalized Odds …

Enforcing delayed-impact fairness guarantees

A Weber, B Metevier, Y Brun, PS Thomas… - arXiv preprint arXiv …, 2022 - arxiv.org
Recent research has shown that seemingly fair machine learning models, when used to
inform decisions that have an impact on people's lives or well-being (e.g., applications …