A review on machine unlearning
H Zhang, T Nakamura, T Isohara, K Sakurai - SN Computer Science, 2023 - Springer
Recently, an increasing number of laws have come to govern the use of users' private data. For
example, Article 17 of the General Data Protection Regulation (GDPR), the right to be …
Fast yet effective machine unlearning
Unlearning the data observed during the training of a machine learning (ML) model is an
important task that can play a pivotal role in fortifying the privacy and security of ML-based …
Influence functions in deep learning are fragile
Influence functions approximate the effect of training samples in test-time predictions and
have a wide variety of applications in machine learning interpretability and uncertainty …
From development to deployment: dataset shift, causality, and shift-stable models in health AI
A Subbaswamy, S Saria - Biostatistics, 2020 - academic.oup.com
The deployment of machine learning (ML) and statistical models is beginning to transform
the practice of healthcare, with models now able to help clinicians diagnose conditions like …
Approximate data deletion from machine learning models
Deleting data from a trained machine learning (ML) model is a critical task in many
applications. For example, we may want to remove the influence of training points that might …
Mixed-privacy forgetting in deep networks
We show that the influence of a subset of the training samples can be removed -- or
"forgotten" -- from the weights of a network trained on large-scale image classification tasks …
On the accuracy of influence functions for measuring group effects
PWW Koh, KS Ang, H Teo… - Advances in neural …, 2019 - proceedings.neurips.cc
Influence functions estimate the effect of removing a training point on a model without the
need to retrain. They are based on a first-order Taylor approximation that is guaranteed to …
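The first-order estimate this abstract refers to can be sketched numerically. The snippet below is an illustrative sketch, not the cited paper's method: it approximates the parameter change from deleting one training point of an L2-regularized linear regression as theta_{-i} ≈ theta_hat + H^{-1} grad L_i(theta_hat), then compares against exact leave-one-out retraining. All data, dimensions, the index `i`, and the regularization strength `lam` are assumed illustration values.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, lam = 200, 3, 1.0                      # assumed synthetic setup
X = rng.normal(size=(n, d))
y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.normal(size=n)

def fit(X, y):
    # exact ridge solution: (X^T X + lam I)^{-1} X^T y
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

theta = fit(X, y)                            # model trained on all points
H = X.T @ X + lam * np.eye(d)                # Hessian of the total objective

i = 7                                        # training point to "delete"
grad_i = X[i] * (X[i] @ theta - y[i])        # gradient of point i's squared loss
theta_approx = theta + np.linalg.solve(H, grad_i)  # first-order influence update

# ground truth: retrain without point i
theta_exact = fit(np.delete(X, i, axis=0), np.delete(y, i))
print(np.linalg.norm(theta_approx - theta_exact))
```

Because the leave-one-out objective differs from the full objective by a single point's loss, the approximation error is second order in that point's weight, so for moderately large n the influence update lands much closer to the retrained parameters than the unmodified model does.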
Achieving fairness at no utility cost via data reweighing with influence
With the fast development of algorithmic governance, fairness has become a compulsory
property for machine learning models to suppress unintentional discrimination. In this paper …
Forgetting outside the box: Scrubbing deep networks of information accessible from input-output observations
We describe a procedure for removing dependency on a cohort of training data from a
trained deep network that improves upon and generalizes previous methods to different …
Survey: Leakage and privacy at inference time
Leakage of data from publicly available Machine Learning (ML) models is an area of
growing significance since commercial and government applications of ML can draw on …