Anonymization: The imperfect science of using data while preserving privacy
Information about us, our actions, and our preferences is created at scale through surveys or
scientific studies or as a result of our interaction with digital devices such as smartphones …
Measuring forgetting of memorized training examples
Machine learning models exhibit two seemingly contradictory phenomena: training data
memorization and various forms of forgetting. In memorization, models overfit specific …
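A minimal sketch of how such memorization-then-forgetting dynamics can be probed empirically, assuming a toy setup (not the paper's protocol): a "canary" example is injected early in training, and its loss is tracked as training continues on fresh data.

```python
# Track a canary example's loss as training continues without it; a rising
# curve suggests the early memorized example is being forgotten.
import numpy as np
from sklearn.linear_model import SGDClassifier
from sklearn.metrics import log_loss

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 20))
y = (X[:, 0] + 0.1 * rng.normal(size=2000) > 0).astype(int)

canary_x, canary_y = X[:1], y[:1]        # example seen only at the start
model = SGDClassifier(loss="log_loss", random_state=0)
model.partial_fit(np.vstack([canary_x] * 10), np.repeat(canary_y, 10), classes=[0, 1])

losses = []
for start in range(1, 2000, 100):        # continue training, canary excluded
    batch = slice(start, start + 100)
    model.partial_fit(X[batch], y[batch])
    losses.append(log_loss(canary_y, model.predict_proba(canary_x), labels=[0, 1]))

print("canary loss over continued training:", np.round(losses, 3))
```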
Membership inference attacks against synthetic data through overfitting detection
Data is the foundation of most science. Unfortunately, sharing data can be obstructed by the
risk of violating data privacy, impeding research in fields like healthcare. Synthetic data is a …
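One way to make the overfitting intuition concrete is a nearest-neighbour membership score: if the generator overfits, real training records sit unusually close to the synthetic records. This is a hedged illustration of that heuristic, not the paper's actual detector; the "generator" is faked by perturbing the training data.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(1)
train = rng.normal(size=(500, 5))                        # records used to fit a generator
synthetic = train + 0.05 * rng.normal(size=train.shape)  # stand-in for an overfit generator
non_members = rng.normal(size=(500, 5))

nn = NearestNeighbors(n_neighbors=1).fit(synthetic)

def score(x):
    return -nn.kneighbors(x)[0].ravel()   # closer to synthetic => higher score

threshold = np.median(np.concatenate([score(train), score(non_members)]))
tpr = (score(train) > threshold).mean()
fpr = (score(non_members) > threshold).mean()
print(f"TPR={tpr:.2f} FPR={fpr:.2f}  (a gap indicates membership leakage)")
```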
SoK: Let the privacy games begin! A unified treatment of data inference privacy in machine learning
Deploying machine learning models in production may allow adversaries to infer sensitive
information about training data. There is a vast literature analyzing different types of …
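The unifying abstraction is a cryptographic-style game: a challenger trains a model with or without a challenge point depending on a secret bit, and the adversary's advantage over random guessing measures leakage. A minimal sketch of one such game, with an arbitrary confidence-threshold adversary (model, data, and threshold are all illustrative):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)

def play_one_game():
    X = rng.normal(size=(200, 10))
    y = (X.sum(axis=1) > 0).astype(int)
    z_x, z_y = rng.normal(size=(1, 10)), np.array([1])  # challenge point
    b = rng.integers(2)                                 # secret membership bit
    X_tr = np.vstack([X, z_x]) if b else X
    y_tr = np.concatenate([y, z_y]) if b else y
    model = LogisticRegression().fit(X_tr, y_tr)
    conf = model.predict_proba(z_x)[0, z_y[0]]
    guess = int(conf > 0.7)              # adversary: "member" if model is confident
    return guess == b

wins = np.mean([play_one_game() for _ in range(200)])
print(f"adversary win rate: {wins:.2f} (0.5 = no advantage)")
```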
Demystifying uneven vulnerability of link stealing attacks against graph neural networks
While graph neural networks (GNNs) dominate the state-of-the-art for exploring graphs in
real-world applications, they have been shown to be vulnerable to a growing number of …
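The core intuition behind link stealing is that message passing makes connected nodes' output posteriors more similar, so an adversary holding only black-box posteriors can score candidate edges by similarity. A hedged sketch in which a simulated one-layer mean aggregation stands in for a trained GNN (the real attacks query an actual model):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 100
adj = rng.random((n, n)) < 0.05
adj = np.triu(adj, 1); adj = adj + adj.T            # random undirected graph

feats = rng.normal(size=(n, 8))
deg = adj.sum(1, keepdims=True) + 1
hidden = (feats + adj @ feats) / deg                # mean aggregation ~ GNN smoothing
posteriors = np.exp(hidden) / np.exp(hidden).sum(1, keepdims=True)

def cos(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]
scores = np.array([cos(posteriors[i], posteriors[j]) for i, j in pairs])
labels = np.array([adj[i, j] for i, j in pairs])
print("mean posterior similarity: edges %.3f, non-edges %.3f"
      % (scores[labels].mean(), scores[~labels].mean()))
```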
Bayesian estimation of differential privacy
Algorithms such as Differentially Private SGD enable training machine learning
models with formal privacy guarantees. However, because these guarantees hold with …
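The key move is to treat the attack's error rates as unknowns with Bayesian posteriors rather than point estimates. A sketch of that idea under illustrative counts: Beta posteriors over a membership attack's FPR and FNR are propagated through the DP hypothesis-testing bound eps >= log((1 - FNR) / FPR) to obtain a credible interval on the empirical epsilon.

```python
import numpy as np

rng = np.random.default_rng(4)
trials = 1000
false_pos, false_neg = 60, 55         # hypothetical attack outcomes, not the paper's

# Beta posteriors over FPR and FNR under uniform priors.
fpr = rng.beta(1 + false_pos, 1 + trials - false_pos, size=100_000)
fnr = rng.beta(1 + false_neg, 1 + trials - false_neg, size=100_000)

eps = np.maximum(np.log((1 - fnr) / fpr), np.log((1 - fpr) / fnr))
lo, hi = np.percentile(eps, [2.5, 97.5])
print(f"95% credible interval for the empirical epsilon: [{lo:.2f}, {hi:.2f}]")
```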
Formalizing and estimating distribution inference risks
Distribution inference, sometimes called property inference, infers statistical properties about
a training set from access to a model trained on that data. Distribution inference attacks can …
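A standard recipe for such attacks is shadow models plus a meta-classifier: train many models on datasets whose sensitive-attribute ratio differs, fingerprint each model by its predictions on fixed probe points, and learn to tell the ratios apart. A minimal sketch under toy assumptions (the attribute simply shifts the features):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(5)
probe = rng.normal(size=(20, 5))               # fixed query points

def shadow_model(ratio):
    attr = rng.random(300) < ratio             # sensitive attribute at given ratio
    X = rng.normal(size=(300, 5)) + attr[:, None]
    y = (X[:, 0] > 0.5).astype(int)
    model = LogisticRegression().fit(X, y)
    return model.predict_proba(probe)[:, 1]    # the model's "fingerprint"

ratios = [0.3] * 50 + [0.7] * 50
feats = np.array([shadow_model(r) for r in ratios])
labels = np.array([r > 0.5 for r in ratios])
perm = rng.permutation(len(ratios))
feats, labels = feats[perm], labels[perm]

meta = LogisticRegression().fit(feats[:80], labels[:80])
print("meta-classifier accuracy:", meta.score(feats[80:], labels[80:]))
```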
A survey on membership inference attacks and defenses in Machine Learning
Membership inference (MI) attacks aim to infer whether a data record was used to
train a target model. Due to the serious privacy risks, MI attacks have been attracting a …
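The simplest baseline this survey literature covers is the loss-threshold attack: flag a record as a member when the target model's loss on it falls below a threshold, exploiting the train/test loss gap of overfit models. A sketch with an illustrative threshold and a deliberately overfit model:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(6)
X = rng.normal(size=(400, 10))
y = (X[:, 0] * X[:, 1] > 0).astype(int)
X_in, y_in, X_out, y_out = X[:200], y[:200], X[200:], y[200:]

model = RandomForestClassifier(random_state=0).fit(X_in, y_in)  # overfits its training set

def per_example_loss(Xs, ys):
    p = model.predict_proba(Xs)[np.arange(len(ys)), ys]
    return -np.log(p + 1e-12)

tau = 0.1                                              # illustrative threshold
tpr = (per_example_loss(X_in, y_in) < tau).mean()      # members flagged
fpr = (per_example_loss(X_out, y_out) < tau).mean()    # non-members flagged
print(f"loss-threshold attack: TPR={tpr:.2f}, FPR={fpr:.2f}")
```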
Unlocking accuracy and fairness in differentially private image classification
Privacy-preserving machine learning aims to train models on private data without leaking
sensitive information. Differential privacy (DP) is considered the gold standard framework for …
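The workhorse behind most DP training pipelines, including DP image classifiers, is DP-SGD: clip each per-example gradient to norm C, add Gaussian noise scaled by a multiplier sigma, then average. A hedged toy sketch on logistic regression (not the paper's pipeline; the hyperparameters are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(7)
X = rng.normal(size=(1000, 10))
y = (X[:, 0] > 0).astype(int)
w = np.zeros(10)
C, sigma, lr, batch = 1.0, 1.0, 0.5, 100

for step in range(200):
    idx = rng.choice(len(X), batch, replace=False)
    p = 1 / (1 + np.exp(-X[idx] @ w))
    grads = (p - y[idx])[:, None] * X[idx]        # per-example gradients
    norms = np.linalg.norm(grads, axis=1, keepdims=True)
    grads = grads / np.maximum(1, norms / C)      # clip each gradient to norm C
    noisy = grads.sum(0) + sigma * C * rng.normal(size=w.shape)
    w -= lr * noisy / batch                       # noisy averaged update

acc = ((1 / (1 + np.exp(-X @ w)) > 0.5) == y).mean()
print(f"train accuracy under DP-SGD-style updates: {acc:.2f}")
```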
How to combine membership-inference attacks on multiple updated machine learning models
A large body of research has shown that machine learning models are vulnerable to
membership inference (MI) attacks that violate the privacy of the participants in the training …
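One natural combination strategy is to run a score-based membership test against both the original and the updated model and fuse the per-model scores; a record present in both training sets should look member-like to both. A hedged sketch where the fusion rule (summing log-confidences) is purely illustrative:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(8)
X = rng.normal(size=(600, 8))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
members, fresh, outsiders = slice(0, 200), slice(200, 400), slice(400, 600)

m1 = RandomForestClassifier(random_state=0).fit(X[members], y[members])   # original
m2 = RandomForestClassifier(random_state=1).fit(                          # updated model
    np.vstack([X[members], X[fresh]]), np.concatenate([y[members], y[fresh]]))

def score(model, Xs, ys):
    p = model.predict_proba(Xs)[np.arange(len(ys)), ys]
    return np.log(p + 1e-12)                      # higher = more member-like

comb_members = score(m1, X[members], y[members]) + score(m2, X[members], y[members])
comb_out = score(m1, X[outsiders], y[outsiders]) + score(m2, X[outsiders], y[outsiders])
print("mean combined score: members %.2f, non-members %.2f"
      % (comb_members.mean(), comb_out.mean()))
```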