Adaptive priority reweighing for generalizing fairness improvement

Z. Hu, Y. Xu, X. Tian. 2023 International Joint Conference on Neural Networks (IJCNN), 2023. ieeexplore.ieee.org
With the increasing penetration of Machine-Learning (ML) applications in critical decision-making areas, calls for algorithmic fairness have become more prominent. Though there are diverse approaches to improving algorithmic fairness by training algorithms with fairness constraints, their performance does not generalize well to the test set. A fair algorithm with strong performance and better generalizability is needed. This paper proposes a novel adaptive reweighing method to eliminate the impact of distribution shifts between training and test data on model generalizability. Specifically, instead of assigning a unified weight to each (sub)group, as most previous reweighing methods do, we granularly model the distance from each sample's prediction to the decision boundary and assign higher individual weights to the samples closer to the decision boundary in each (sub)group. Our adaptive reweighing method prioritizes samples closer to the decision boundary, assigning them higher weight to improve the generalizability of fair classifiers. We design extensive experiments to evaluate the generalizability of our adaptive priority reweighing method on accuracy and fairness measures (i.e., equal opportunity, equalized odds, and demographic parity) in tabular benchmarks across Adult, COMPAS, and IPUMS. We further highlight the performance of our method in improving the fairness of language and vision models. We believe our method shows promising results in improving the fairness of any pre-trained model simply via fine-tuning.
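The core idea described in the abstract, individually up-weighting samples near the decision boundary within each (sub)group rather than giving one weight per group, can be illustrated with a minimal sketch. This is not the paper's exact scheme; the exponential decay, the `alpha` parameter, and the per-group normalization below are illustrative assumptions.

```python
import numpy as np

def adaptive_priority_weights(probs, groups, alpha=2.0):
    """Illustrative sketch of boundary-prioritized reweighing.

    probs  : predicted positive-class probabilities of a binary classifier
    groups : (sub)group label for each sample
    alpha  : hypothetical decay rate (not from the paper); larger alpha
             concentrates weight more sharply on boundary samples
    """
    probs = np.asarray(probs, dtype=float)
    groups = np.asarray(groups)
    # Distance from each prediction to the 0.5 decision boundary.
    dist = np.abs(probs - 0.5)
    # Samples closer to the boundary receive larger weight.
    weights = np.exp(-alpha * dist)
    # Normalize within each (sub)group so group weights average to 1,
    # keeping group contributions balanced while still prioritizing
    # boundary samples inside each group.
    for g in np.unique(groups):
        mask = groups == g
        weights[mask] /= weights[mask].mean()
    return weights

# Confident sample (0.9) vs. near-boundary sample (0.52) in group 0,
# and the mirror case in group 1.
w = adaptive_priority_weights([0.9, 0.52, 0.48, 0.1], [0, 0, 1, 1])
```

In practice such weights could be passed as `sample_weight` when fine-tuning or re-fitting a classifier, which matches the abstract's suggestion that fairness can be improved on a pre-trained model via reweighted fine-tuning.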