Nuanced metrics for measuring unintended bias with real data for text classification

D Borkan, L Dixon, J Sorensen, N Thain… - … proceedings of the …, 2019 - dl.acm.org
Unintended bias in Machine Learning can manifest as systemic differences in performance
for different demographic groups, potentially compounding existing challenges to fairness in …

Balanced datasets are not enough: Estimating and mitigating gender bias in deep image representations

T Wang, J Zhao, M Yatskar… - Proceedings of the …, 2019 - openaccess.thecvf.com
In this work, we present a framework to measure and mitigate intrinsic biases with respect to
protected variables, such as gender, in visual recognition tasks. We show that trained models …

Predictive biases in natural language processing models: A conceptual framework and overview

D Shah, HA Schwartz, D Hovy - arXiv preprint arXiv:1912.11078, 2019 - arxiv.org
An increasing number of works in natural language processing have addressed the effect of
bias on the predicted outcomes, introducing mitigation techniques that act on different parts …

Towards personalized fairness based on causal notion

Y Li, H Chen, S Xu, Y Ge, Y Zhang - … of the 44th International ACM SIGIR …, 2021 - dl.acm.org
Recommender systems are having an increasingly critical impact on individuals and society,
as a growing number of users rely on them for information seeking and decision making …

A survey on gender bias in natural language processing

K Stanczak, I Augenstein - arXiv preprint arXiv:2112.14168, 2021 - arxiv.org
Language can be used as a means of reproducing and enforcing harmful stereotypes and
biases and has been analysed as such in numerous studies. In this paper, we present a …

EDITS: Modeling and mitigating data bias for graph neural networks

Y Dong, N Liu, B Jalaian, J Li - Proceedings of the ACM web conference …, 2022 - dl.acm.org
Graph Neural Networks (GNNs) have shown superior performance in analyzing attributed
networks in various web-based applications such as social recommendation and web …

Fairness in deep learning: A computational perspective

M Du, F Yang, N Zou, X Hu - IEEE Intelligent Systems, 2020 - ieeexplore.ieee.org
Fairness in deep learning has attracted tremendous attention recently, as deep learning is
increasingly being used in high-stakes decision-making applications that affect individual …

DP-Forward: Fine-tuning and inference on language models with differential privacy in forward pass

M Du, X Yue, SSM Chow, T Wang, C Huang… - Proceedings of the 2023 …, 2023 - dl.acm.org
Differentially private stochastic gradient descent (DP-SGD) adds noise to gradients in back-
propagation, safeguarding training data from privacy leakage, particularly membership …
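This snippet describes the DP-SGD baseline that DP-Forward departs from: noise is added to gradients during back-propagation. As a rough, minimal sketch of that idea only (per-example gradients assumed already computed; the function name and parameter values are illustrative and not taken from the paper):

import numpy as np

def dp_sgd_step(per_example_grads, clip_norm=1.0, noise_multiplier=1.0):
    # per_example_grads: array of shape (batch_size, num_params).
    norms = np.linalg.norm(per_example_grads, axis=1, keepdims=True)
    # Clip: scale each example's gradient so its L2 norm is at most clip_norm.
    clipped = per_example_grads * np.minimum(1.0, clip_norm / (norms + 1e-12))
    summed = clipped.sum(axis=0)
    # Add Gaussian noise calibrated to the clipping bound (the sensitivity).
    noise = np.random.normal(0.0, noise_multiplier * clip_norm, size=summed.shape)
    return (summed + noise) / per_example_grads.shape[0]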

Fairness in recommendation: A survey

Y Li, H Chen, S Xu, Y Ge, J Tan, S Liu… - arXiv preprint arXiv …, 2022 - arxiv.org
As one of the most pervasive applications of machine learning, recommender systems are
playing an important role in assisting human decision making. The satisfaction of users and …

Unlearning bias in language models by partitioning gradients

C Yu, S Jeoung, A Kasi, P Yu, H Ji - Findings of the Association for …, 2023 - aclanthology.org
Recent research has shown that large-scale pretrained language models, specifically
transformers, tend to exhibit issues relating to racism, sexism, religious bias, and toxicity in …