Can querying for bias leak protected attributes? Achieving privacy with smooth sensitivity

F Hamman, J Chen, S Dutta - Proceedings of the 2023 ACM Conference …, 2023 - dl.acm.org
Existing regulations often prohibit model developers from accessing protected attributes
(gender, race, etc.) during training. This leads to scenarios where fairness assessments …

Fairness without demographic data: A survey of approaches

C Ashurst, A Weller - Proceedings of the 3rd ACM Conference on Equity …, 2023 - dl.acm.org
Detecting, measuring, and mitigating unfairness are core aims of
algorithmic fairness research. However, the most prominent approaches require access to …

How to select physics-informed neural networks in the absence of ground truth: a Pareto front-based strategy

Z Wei, JC Wong, NWY Sung, A Gupta… - 1st Workshop on the …, 2023 - openreview.net
Physics-informed neural networks (PINNs) have been proposed as a potential route to
inverse modelling or as a mesh-free alternative to numerical methods for partial differential …

Fair Classifiers Without Fair Training: An Influence-Guided Data Sampling Approach

J Pang, J Wang, Z Zhu, Y Yao, C Qian, Y Liu - arXiv preprint arXiv …, 2024 - arxiv.org
A fair classifier should ensure that people from different groups benefit equitably, yet group
information is often sensitive and unsuitable for model training. Therefore, learning a fair …

A Survey on Fairness Without Demographics

PJ Kenfack, SE Kahou, U Aïvodji - researchgate.net
Bias in Machine Learning (ML) models is a significant challenge for the
ML community. Real-world biases can be embedded in the data used to train …