Membership inference attacks on machine learning: A survey

H Hu, Z Salcic, L Sun, G Dobbie, PS Yu… - ACM Computing Surveys …, 2022 - dl.acm.org
Machine learning (ML) models have been widely applied to various applications, including
image classification, text generation, audio recognition, and graph data analysis. However …

Differential privacy for deep and federated learning: A survey

A El Ouadrhiri, A Abdelhadi - IEEE access, 2022 - ieeexplore.ieee.org
Users' privacy is vulnerable at all stages of the deep learning process. Sensitive information
of users may be disclosed during data collection, during training, or even after releasing the …

Advances and open problems in federated learning

P Kairouz, HB McMahan, B Avent… - … and trends® in …, 2021 - nowpublishers.com
Federated learning (FL) is a machine learning setting where many clients (e.g., mobile
devices or whole organizations) collaboratively train a model under the orchestration of a …

Fedml: A research library and benchmark for federated machine learning

C He, S Li, J So, X Zeng, M Zhang, H Wang… - arXiv preprint arXiv …, 2020 - arxiv.org
Federated learning (FL) is a rapidly growing research field in machine learning. However,
existing FL libraries cannot adequately support diverse algorithmic development; …

Federated learning for open banking

G Long, Y Tan, J Jiang, C Zhang - Federated learning: privacy and …, 2020 - Springer
Open banking enables individual customers to own their banking data, which provides
fundamental support for the boosting of a new ecosystem of data marketplaces and financial …

Quantifying privacy risks of masked language models using membership inference attacks

F Mireshghallah, K Goyal, A Uniyal… - arXiv preprint arXiv …, 2022 - arxiv.org
The wide adoption and application of Masked language models (MLMs) on sensitive data
(from legal to medical) necessitates a thorough quantitative investigation into their privacy …

Large language model alignment: A survey

T Shen, R Jin, Y Huang, C Liu, W Dong, Z Guo… - arXiv preprint arXiv …, 2023 - arxiv.org
Recent years have witnessed remarkable progress made in large language models (LLMs).
Such advancements, while garnering significant attention, have concurrently elicited various …

Language generation models can cause harm: So what can we do about it? An actionable survey

S Kumar, V Balachandran, L Njoo… - arXiv preprint arXiv …, 2022 - arxiv.org
Recent advances in the capacity of large language models to generate human-like text have
resulted in their increased adoption in user-facing settings. In parallel, these improvements …

Static and sequential malicious attacks in the context of selective forgetting

C Zhao, W Qian, R Ying, M Huai - Advances in Neural …, 2023 - proceedings.neurips.cc
With the growing demand for the right to be forgotten, there is an increasing need for
machine learning models to forget sensitive data and its impact. To address this, the …

SoK: cryptographic neural-network computation

LKL Ng, SSM Chow - 2023 IEEE Symposium on Security and …, 2023 - ieeexplore.ieee.org
We studied 53 privacy-preserving neural-network papers in 2016–2022 based on
cryptography (without trusted processors or differential privacy), 16 of which only use …