How to dp-fy ml: A practical guide to machine learning with differential privacy

N Ponomareva, H Hazimeh, A Kurakin, Z Xu… - Journal of Artificial …, 2023 - jair.org
Machine Learning (ML) models are ubiquitous in real-world applications and are a
constant focus of research. Modern ML models have become more complex, deeper, and …
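
The standard recipe this guide concerns is DP-SGD: clip each example's gradient and add Gaussian noise before the update. The sketch below shows that recipe on a toy numpy logistic regression; the model, data, and hyperparameters (clip_norm, noise_multiplier, learning rate) are illustrative assumptions, not values from the paper.

```python
# Minimal DP-SGD sketch: per-example gradient clipping + Gaussian noise,
# applied to logistic regression on synthetic data.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 20))
y = (X[:, 0] + 0.1 * rng.normal(size=1000) > 0).astype(float)

w = np.zeros(20)
clip_norm, noise_multiplier, lr, batch_size = 1.0, 1.1, 0.1, 100

for epoch in range(5):
    for start in range(0, len(X), batch_size):
        xb, yb = X[start:start + batch_size], y[start:start + batch_size]
        preds = 1.0 / (1.0 + np.exp(-xb @ w))
        # Per-example gradients of the logistic loss, shape (batch, dim).
        per_example_grads = (preds - yb)[:, None] * xb
        # Clip each example's gradient to L2 norm <= clip_norm.
        norms = np.linalg.norm(per_example_grads, axis=1, keepdims=True)
        clipped = per_example_grads / np.maximum(1.0, norms / clip_norm)
        # Add Gaussian noise calibrated to the clipping bound, then average.
        noise = rng.normal(scale=noise_multiplier * clip_norm, size=w.shape)
        noisy_grad = (clipped.sum(axis=0) + noise) / batch_size
        w -= lr * noisy_grad
```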

Membership inference attacks on machine learning: A survey

H Hu, Z Salcic, L Sun, G Dobbie, PS Yu… - ACM Computing Surveys …, 2022 - dl.acm.org
Machine learning (ML) models have been widely applied to various applications, including
image classification, text generation, audio recognition, and graph data analysis. However …

Membership inference attacks from first principles

N Carlini, S Chien, M Nasr, S Song… - … IEEE Symposium on …, 2022 - ieeexplore.ieee.org
A membership inference attack allows an adversary to query a trained machine learning
model to predict whether or not a particular example was contained in the model's training …
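
A simple illustration of the attack setting described here is the loss-threshold baseline: examples whose loss under the trained model is unusually low are guessed to be training members. The sketch below is that generic baseline only, not the calibrated likelihood-ratio attack the paper develops; the threshold choice and toy model are assumptions.

```python
# Loss-threshold membership-inference baseline on a toy logistic regression.
import numpy as np

rng = np.random.default_rng(1)

def train_logreg(X, y, lr=0.5, steps=500):
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-X @ w))
        w -= lr * X.T @ (p - y) / len(y)
    return w

def per_example_loss(w, X, y):
    p = np.clip(1.0 / (1.0 + np.exp(-X @ w)), 1e-9, 1 - 1e-9)
    return -(y * np.log(p) + (1 - y) * np.log(1 - p))

# Members are used for training; non-members are held out.
X = rng.normal(size=(400, 10))
y = (X[:, 0] > 0).astype(float)
members, non_members = slice(0, 200), slice(200, 400)
w = train_logreg(X[members], y[members])

losses = per_example_loss(w, X, y)
threshold = np.median(losses)          # assumed attacker-chosen threshold
guess_member = losses < threshold      # low loss -> guess "member"

tpr = guess_member[members].mean()
fpr = guess_member[non_members].mean()
print(f"TPR={tpr:.2f}  FPR={fpr:.2f}")
```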

Enhanced membership inference attacks against machine learning models

J Ye, A Maddi, SK Murakonda… - Proceedings of the …, 2022 - dl.acm.org
How much does a machine learning algorithm leak about its training data, and why?
Membership inference attacks are used as an auditing tool to quantify this leakage. In this …
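
To make "quantify this leakage" concrete, attack scores are typically summarized as a membership advantage (TPR minus FPR) or the true-positive rate at a fixed low false-positive rate. The sketch below computes both from synthetic score arrays standing in for an attack's per-example outputs; the score distributions and the 1% FPR target are assumptions.

```python
# Turning membership-inference scores into leakage metrics for auditing.
import numpy as np

rng = np.random.default_rng(2)
member_scores = rng.normal(loc=1.0, size=1000)     # higher = "more member-like"
nonmember_scores = rng.normal(loc=0.0, size=1000)

def audit(member_scores, nonmember_scores, target_fpr=0.01):
    thresholds = np.sort(np.concatenate([member_scores, nonmember_scores]))
    best_adv, tpr_at_fpr = 0.0, 0.0
    for t in thresholds:
        tpr = (member_scores >= t).mean()
        fpr = (nonmember_scores >= t).mean()
        best_adv = max(best_adv, tpr - fpr)
        if fpr <= target_fpr:
            tpr_at_fpr = max(tpr_at_fpr, tpr)
    return best_adv, tpr_at_fpr

adv, tpr = audit(member_scores, nonmember_scores)
print(f"membership advantage={adv:.3f}, TPR@1%FPR={tpr:.3f}")
```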

Label-only membership inference attacks

CA Choquette-Choo, F Tramer… - International …, 2021 - proceedings.mlr.press
Membership inference is one of the simplest privacy threats faced by machine learning
models that are trained on private sensitive data. In this attack, an adversary infers whether a …
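
In the label-only setting the adversary sees only hard labels, so one common heuristic is to guess "member" when the prediction stays correct under small random perturbations of the input (training points tend to be more robust). The sketch below illustrates that heuristic; the black-box `predict_fn`, noise scale, and robustness threshold are assumptions, not the paper's exact procedure.

```python
# Label-only membership-inference sketch via perturbation robustness.
import numpy as np

rng = np.random.default_rng(3)

def perturbation_robustness(predict_fn, x, y_true, n_trials=50, sigma=0.1):
    """Fraction of noisy copies of x that the model still labels y_true."""
    noisy = x + sigma * rng.normal(size=(n_trials, x.shape[0]))
    return np.mean(predict_fn(noisy) == y_true)

def label_only_attack(predict_fn, x, y_true, threshold=0.8):
    # Higher robustness than the threshold -> guess "member".
    return perturbation_robustness(predict_fn, x, y_true) >= threshold

# Toy black-box: a fixed linear classifier standing in for the target model.
w = rng.normal(size=10)
predict_fn = lambda X: (X @ w > 0).astype(int)

x = rng.normal(size=10)
print(label_only_attack(predict_fn, x, y_true=int(x @ w > 0)))
```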

The future of digital health with federated learning

N Rieke, J Hancox, W Li, F Milletari, HR Roth… - NPJ digital …, 2020 - nature.com
Data-driven machine learning (ML) has emerged as a promising approach for building
accurate and robust statistical models from medical data, which is collected in huge volumes …
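
The core idea of federated learning here is that each hospital or site trains locally and only model parameters are aggregated, so raw medical records never leave the site. Below is a minimal federated-averaging (FedAvg) sketch under that assumption; the toy model, synthetic per-site data, and round counts are illustrative only.

```python
# Minimal FedAvg sketch: local SGD at each site, weight averaging at the server.
import numpy as np

rng = np.random.default_rng(4)

def local_sgd(w, X, y, lr=0.1, steps=20):
    w = w.copy()
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-X @ w))
        w -= lr * X.T @ (p - y) / len(y)
    return w

# Three "hospitals", each holding its own private dataset.
sites = []
for _ in range(3):
    X = rng.normal(size=(200, 5))
    y = (X[:, 0] > 0).astype(float)
    sites.append((X, y))

global_w = np.zeros(5)
for round_ in range(10):
    # Each site updates the current global model on its local data only.
    local_weights = [local_sgd(global_w, X, y) for X, y in sites]
    # The server aggregates weights; no raw data is transmitted.
    global_w = np.mean(local_weights, axis=0)
```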

Are diffusion models vulnerable to membership inference attacks?

J Duan, F Kong, S Wang, X Shi… - … Conference on Machine …, 2023 - proceedings.mlr.press
Diffusion-based generative models have shown great potential for image synthesis, but
there is a lack of research on the security and privacy risks they may pose. In this paper, we …

Privacy for free: How does dataset condensation help privacy?

T Dong, B Zhao, L Lyu - International Conference on …, 2022 - proceedings.mlr.press
To prevent unintentional data leakage, the research community has resorted to data generators
that can produce differentially private data for model training. However, for the sake of the …

PPFL: Privacy-preserving federated learning with trusted execution environments

F Mo, H Haddadi, K Katevas, E Marin… - Proceedings of the 19th …, 2021 - dl.acm.org
We propose and implement a Privacy-preserving Federated Learning (PPFL) framework for
mobile systems to limit privacy leakages in federated learning. Leveraging the widespread …

A survey of privacy attacks in machine learning

M Rigaki, S Garcia - ACM Computing Surveys, 2023 - dl.acm.org
As machine learning becomes more widely used, the need to study its implications in
security and privacy becomes more urgent. Although the body of work in privacy has been …