Local differential privacy and its applications: A comprehensive survey

M Yang, T Guo, T Zhu, I Tjuawinata, J Zhao… - Computer Standards & …, 2023 - Elsevier
With the rapid development of low-cost consumer electronics and pervasive adoption of next
generation wireless communication technologies, a tremendous amount of data has been …

Distributed graph neural network training: A survey

Y Shao, H Li, X Gu, H Yin, Y Li, X Miao… - ACM Computing …, 2024 - dl.acm.org
Graph neural networks (GNNs) are a type of deep learning model that is trained on
graphs and have been successfully applied in various domains. Despite the effectiveness of …

Large language model unlearning

Y Yao, X Xu, Y Liu - arXiv preprint arXiv:2310.10683, 2023 - arxiv.org
We study how to perform unlearning, i.e., forgetting undesirable (mis)behaviors, on large
language models (LLMs). We show at least three scenarios of aligning LLMs with human …

Model sparsity can simplify machine unlearning

J Liu, P Ram, Y Yao, G Liu, Y Liu… - Advances in Neural …, 2024 - proceedings.neurips.cc
In response to recent data regulation requirements, machine unlearning (MU) has emerged
as a critical process to remove the influence of specific examples from a given model …

Toward generalist anomaly detection via in-context residual learning with few-shot sample prompts

J Zhu, G Pang - Proceedings of the IEEE/CVF Conference …, 2024 - openaccess.thecvf.com
This paper explores the problem of Generalist Anomaly Detection (GAD) aiming to train one
single detection model that can generalize to detect anomalies in diverse datasets from …

Machine unlearning: Solutions and challenges

J Xu, Z Wu, C Wang, X Jia - IEEE Transactions on Emerging …, 2024 - ieeexplore.ieee.org
Machine learning models may inadvertently memorize sensitive, unauthorized, or malicious
data, posing risks of privacy breaches, security vulnerabilities, and performance …

Knowledge unlearning for llms: Tasks, methods, and challenges

N Si, H Zhang, H Chang, W Zhang, D Qu… - arXiv preprint arXiv …, 2023 - arxiv.org
In recent years, large language models (LLMs) have spurred a new research paradigm in
natural language processing. Despite their excellent capability in knowledge-based …

Negative preference optimization: From catastrophic collapse to effective unlearning

R Zhang, L Lin, Y Bai, S Mei - arXiv preprint arXiv:2404.05868, 2024 - arxiv.org
Large Language Models (LLMs) often memorize sensitive, private, or copyrighted data
during pre-training. LLM unlearning aims to eliminate the influence of undesirable data from …

Towards safer large language models through machine unlearning

Z Liu, G Dou, Z Tan, Y Tian, M Jiang - arXiv preprint arXiv:2402.10058, 2024 - arxiv.org
The rapid advancement of Large Language Models (LLMs) has demonstrated their vast
potential across various domains, attributed to their extensive pretraining knowledge and …

A survey on federated unlearning: Challenges, methods, and future directions

Z Liu, Y Jiang, J Shen, M Peng, KY Lam… - arXiv preprint arXiv …, 2023 - arxiv.org
In recent years, the notion of "the right to be forgotten" (RTBF) has evolved into a
fundamental element of data privacy regulations, affording individuals the ability to request …