A survey of machine unlearning

TT Nguyen, TT Huynh, PL Nguyen, AWC Liew… - arXiv preprint arXiv …, 2022 - arxiv.org
Today, computer systems hold large amounts of personal data. Yet while such an
abundance of data allows breakthroughs in artificial intelligence, and especially machine …

Efficient attribute unlearning: Towards selective removal of input attributes from feature representations

T Guo, S Guo, J Zhang, W Xu, J Wang - arXiv preprint arXiv:2202.13295, 2022 - arxiv.org
Recently, the enactment of privacy regulations has promoted the rise of the machine
unlearning paradigm. Existing studies of machine unlearning mainly focus on sample-wise …

GBDF: gender balanced deepfake dataset towards fair deepfake detection

AV Nadimpalli, A Rattani - International Conference on Pattern …, 2022 - Springer
Facial forgery by deepfakes has raised severe societal concerns. Several solutions have
been proposed by the vision community to effectively combat the misinformation on the …

Fairness in face presentation attack detection

M Fang, W Yang, A Kuijper, V Struc, N Damer - Pattern Recognition, 2024 - Elsevier
Face recognition (FR) algorithms have been proven to exhibit discriminatory behaviors
against certain demographic and non-demographic groups, raising ethical and legal …

Robustness disparities in face detection

S Dooley, GZ Wei, T Goldstein… - Advances in Neural …, 2022 - proceedings.neurips.cc
Facial analysis systems have been deployed by large companies and critiqued by scholars
and activists for the past decade. Many existing algorithmic audits examine the performance …

Learning to split for automatic bias detection

Y Bao, R Barzilay - arXiv preprint arXiv:2204.13749, 2022 - arxiv.org
Classifiers are biased when trained on biased datasets. As a remedy, we propose Learning
to Split (ls), an algorithm for automatic bias detection. Given a dataset with input-label pairs …

A novel approach for bias mitigation of gender classification algorithms using consistency regularization

A Krishnan, A Rattani - Image and Vision Computing, 2023 - Elsevier
Published research has confirmed the bias of automated face-based gender classification
algorithms across gender-racial groups. Specifically, unequal accuracy rates were obtained …

Zero-shot racially balanced dataset generation using an existing biased StyleGAN2

A Jain, N Memon, J Togelius - 2023 IEEE International Joint …, 2023 - ieeexplore.ieee.org
Facial recognition systems have made significant strides thanks to data-heavy deep learning
models, but these models rely on large privacy-sensitive datasets. Further, many of these …

Adventures of Trustworthy Vision-Language Models: A Survey

M Vatsa, A Jain, R Singh - Proceedings of the AAAI Conference on …, 2024 - ojs.aaai.org
Recently, transformers have become incredibly popular in computer vision and vision-
language tasks. This notable rise in their usage can be primarily attributed to the capabilities …

On responsible machine learning datasets emphasizing fairness, privacy and regulatory norms with examples in biometrics and healthcare

S Mittal, K Thakral, R Singh, M Vatsa, T Glaser… - Nature Machine …, 2024 - nature.com
Artificial Intelligence (AI) has seamlessly integrated into numerous scientific domains,
catalysing unparalleled enhancements across a broad spectrum of tasks; however, its …