A review on machine unlearning

H Zhang, T Nakamura, T Isohara, K Sakurai - SN Computer Science, 2023 - Springer
Recently, an increasing number of laws have come to govern the use of users' private data. For
example, Article 17 of the General Data Protection Regulation (GDPR), the right to be …

Fast yet effective machine unlearning

AK Tarun, VS Chundawat, M Mandal… - IEEE Transactions on …, 2023 - ieeexplore.ieee.org
Unlearning the data observed during the training of a machine learning (ML) model is an
important task that can play a pivotal role in fortifying the privacy and security of ML-based …

Influence functions in deep learning are fragile

S Basu, P Pope, S Feizi - arXiv preprint arXiv:2006.14651, 2020 - arxiv.org
Influence functions approximate the effect of training samples on test-time predictions and
have a wide variety of applications in machine learning interpretability and uncertainty …

From development to deployment: dataset shift, causality, and shift-stable models in health AI

A Subbaswamy, S Saria - Biostatistics, 2020 - academic.oup.com
The deployment of machine learning (ML) and statistical models is beginning to transform
the practice of healthcare, with models now able to help clinicians diagnose conditions like …

Approximate data deletion from machine learning models

Z Izzo, MA Smart, K Chaudhuri… - … Conference on Artificial …, 2021 - proceedings.mlr.press
Deleting data from a trained machine learning (ML) model is a critical task in many
applications. For example, we may want to remove the influence of training points that might …

Mixed-privacy forgetting in deep networks

A Golatkar, A Achille, A Ravichandran… - Proceedings of the …, 2021 - openaccess.thecvf.com
We show that the influence of a subset of the training samples can be removed, or
"forgotten," from the weights of a network trained on large-scale image classification tasks …

On the accuracy of influence functions for measuring group effects

PWW Koh, KS Ang, H Teo… - Advances in neural …, 2019 - proceedings.neurips.cc
Influence functions estimate the effect of removing a training point on a model without the
need to retrain. They are based on a first-order Taylor approximation that is guaranteed to …
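For reference, the first-order approximation these influence-function papers build on (following the standard Koh & Liang formulation; the notation below is ours, not taken from the listed abstracts) can be sketched as:

```latex
% Influence of removing a training point z on the empirical risk minimizer.
% \hat{\theta} minimizes the average loss over n training points;
% \hat{\theta}_{-z} is the minimizer with z removed.
\hat{\theta}_{-z} - \hat{\theta}
  \;\approx\; \frac{1}{n} \, H_{\hat{\theta}}^{-1} \, \nabla_{\theta} L(z, \hat{\theta}),
\qquad
H_{\hat{\theta}} \;=\; \frac{1}{n} \sum_{i=1}^{n} \nabla_{\theta}^{2} L(z_i, \hat{\theta}),
```

where $H_{\hat{\theta}}$ is the Hessian of the empirical loss at the trained parameters. This is the first-order Taylor approximation the Koh et al. entry above refers to; the Basu et al. entry examines when it breaks down in deep networks.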

Achieving fairness at no utility cost via data reweighing with influence

P Li, H Liu - International Conference on Machine Learning, 2022 - proceedings.mlr.press
With the fast development of algorithmic governance, fairness has become a required
property of machine learning models, intended to suppress unintentional discrimination. In this paper …

Forgetting outside the box: Scrubbing deep networks of information accessible from input-output observations

A Golatkar, A Achille, S Soatto - … Conference, Glasgow, UK, August 23–28 …, 2020 - Springer
We describe a procedure for removing dependency on a cohort of training data from a
trained deep network that improves upon and generalizes previous methods to different …

Survey: Leakage and privacy at inference time

M Jegorova, C Kaul, C Mayor, AQ O'Neil… - … on Pattern Analysis …, 2022 - ieeexplore.ieee.org
Leakage of data from publicly available Machine Learning (ML) models is an area of
growing significance since commercial and government applications of ML can draw on …