Membership inference attacks on machine learning: A survey
Machine learning (ML) models have been widely applied to various applications, including
image classification, text generation, audio recognition, and graph data analysis. However …
Differential privacy for deep and federated learning: A survey
A El Ouadrhiri, A Abdelhadi - IEEE Access, 2022 - ieeexplore.ieee.org
Users' privacy is vulnerable at all stages of the deep learning process. Sensitive information
of users may be disclosed during data collection, during training, or even after releasing the …
Advances and open problems in federated learning
Federated learning (FL) is a machine learning setting where many clients (e.g., mobile
devices or whole organizations) collaboratively train a model under the orchestration of a …
Fedml: A research library and benchmark for federated machine learning
Federated learning (FL) is a rapidly growing research field in machine learning. However,
existing FL libraries cannot adequately support diverse algorithmic development; …
Federated learning for open banking
Open banking enables individual customers to own their banking data, which provides
fundamental support for the growth of a new ecosystem of data marketplaces and financial …
Quantifying privacy risks of masked language models using membership inference attacks
The wide adoption and application of masked language models (MLMs) on sensitive data
(from legal to medical) necessitates a thorough quantitative investigation into their privacy …
Large language model alignment: A survey
Recent years have witnessed remarkable progress made in large language models (LLMs).
Such advancements, while garnering significant attention, have concurrently elicited various …
Language generation models can cause harm: So what can we do about it? An actionable survey
Recent advances in the capacity of large language models to generate human-like text have
resulted in their increased adoption in user-facing settings. In parallel, these improvements …
Static and sequential malicious attacks in the context of selective forgetting
With the growing demand for the right to be forgotten, there is an increasing need for
machine learning models to forget sensitive data and its impact. To address this, the …
SoK: cryptographic neural-network computation
We studied 53 privacy-preserving neural-network papers in 2016-2022 based on
cryptography (without trusted processors or differential privacy), 16 of which only use …