How to DP-fy ML: A practical guide to machine learning with differential privacy
Machine Learning (ML) models are ubiquitous in real-world applications and are a
constant focus of research. Modern ML models have become more complex, deeper, and …
Membership inference attacks on machine learning: A survey
Machine learning (ML) models have been widely applied across domains including
image classification, text generation, audio recognition, and graph data analysis. However …
Membership inference attacks from first principles
A membership inference attack allows an adversary to query a trained machine learning
model to predict whether or not a particular example was contained in the model's training …
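For intuition on the query-only threat model this entry describes, below is a minimal sketch of the classic loss-threshold baseline for membership inference. The function names, the `predict_proba` query interface, and the fixed threshold are illustrative assumptions; the cited paper itself develops a more careful, per-example likelihood-ratio attack rather than this simple rule.

```python
# Minimal, illustrative loss-threshold membership inference test.
# Assumption: the adversary has black-box query access to the trained
# target model via a `predict_proba` callable (hypothetical name).
import numpy as np

def cross_entropy(probs, label, eps=1e-12):
    """Per-example cross-entropy loss from predicted class probabilities."""
    return -np.log(probs[label] + eps)

def loss_threshold_attack(predict_proba, examples, labels, threshold):
    """Guess 'member' when the target model's loss on an example is low.

    predict_proba: callable mapping a batch of inputs to class probabilities
                   (i.e., query access to the trained model).
    threshold:     tuned by the adversary, e.g. on data known to be non-members.
    Returns a boolean membership guess per example.
    """
    probs = predict_proba(examples)  # query the target model
    losses = np.array([cross_entropy(p, y) for p, y in zip(probs, labels)])
    return losses < threshold        # low loss -> likely seen during training
```

The thresholding step is where more refined attacks differ: instead of one global cutoff, they calibrate the decision per example, typically using shadow models trained with and without the candidate point.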
Enhanced membership inference attacks against machine learning models
How much does a machine learning algorithm leak about its training data, and why?
Membership inference attacks are used as an auditing tool to quantify this leakage. In this …
Label-only membership inference attacks
CA Choquette-Choo, F Tramer… - International …, 2021 - proceedings.mlr.press
Membership inference is one of the simplest privacy threats faced by machine learning
models that are trained on private sensitive data. In this attack, an adversary infers whether a …
The future of digital health with federated learning
Data-driven machine learning (ML) has emerged as a promising approach for building
accurate and robust statistical models from medical data, which is collected in huge volumes …
Are diffusion models vulnerable to membership inference attacks?
Diffusion-based generative models have shown great potential for image synthesis, but
there is a lack of research on the security and privacy risks they may pose. In this paper, we …
Privacy for free: How does dataset condensation help privacy?
To prevent unintentional data leakage, the research community has resorted to data generators
that can produce differentially private data for model training. However, for the sake of the …
PPFL: Privacy-preserving federated learning with trusted execution environments
We propose and implement a Privacy-preserving Federated Learning (PPFL) framework for
mobile systems to limit privacy leakages in federated learning. Leveraging the widespread …
A survey of privacy attacks in machine learning
As machine learning becomes more widely used, the need to study its implications in
security and privacy becomes more urgent. Although the body of work in privacy has been …