Membership inference attacks on machine learning: A survey
Machine learning (ML) models have been widely applied to various applications, including
image classification, text generation, audio recognition, and graph data analysis. However …
When machine learning meets privacy: A survey and outlook
The newly emerged machine learning (e.g., deep learning) methods have become a strong
driving force to revolutionize a wide range of industries, such as smart healthcare, financial …
Extracting training data from large language models
It has become common to publish large (billion parameter) language models that have been
trained on private datasets. This paper demonstrates that in such settings, an adversary can …
Reconstructing training data from trained neural networks
Understanding to what extent neural networks memorize training data is an intriguing
question with practical and theoretical implications. In this paper we show that in some …
Dataset distillation using neural feature regression
Dataset distillation aims to learn a small synthetic dataset that preserves most of the
information from the original dataset. Dataset distillation can be formulated as a bi-level …
Enhanced membership inference attacks against machine learning models
How much does a machine learning algorithm leak about its training data, and why?
Membership inference attacks are used as an auditing tool to quantify this leakage. In this …
Label-only membership inference attacks
CA Choquette-Choo, F Tramer… - International …, 2021 - proceedings.mlr.press
Membership inference is one of the simplest privacy threats faced by machine learning
models that are trained on private sensitive data. In this attack, an adversary infers whether a …
Detecting pretraining data from large language models
Although large language models (LLMs) are widely deployed, the data used to train them is
rarely disclosed. Given the incredible scale of this data, up to trillions of tokens, it is all but …
Comprehensive privacy analysis of deep learning: Passive and active white-box inference attacks against centralized and federated learning
Deep neural networks are susceptible to various inference attacks as they remember
information about their training data. We design white-box inference attacks to perform a …
Privacy and security issues in deep learning: A survey
Deep Learning (DL) algorithms based on artificial neural networks have achieved
remarkable success and are being extensively applied in a variety of application domains …