A survey on federated unlearning: Challenges, methods, and future directions
In recent years, the notion of “the right to be forgotten” (RTBF) has become a crucial aspect of
data privacy for digital trust and AI safety, requiring the provision of mechanisms that support …
Byzantine machine learning: A primer
The problem of Byzantine resilience in distributed machine learning, aka Byzantine machine
learning, consists of designing distributed algorithms that can train an accurate model …
A robust privacy-preserving federated learning model against model poisoning attacks
A Yazdinejad, A Dehghantanha… - IEEE Transactions …, 2024 - ieeexplore.ieee.org
Although federated learning offers a level of privacy by aggregating user data without direct
access, it remains inherently vulnerable to various attacks, including poisoning attacks …
The impact of adversarial attacks on federated learning: A survey
Federated learning (FL) has emerged as a powerful machine learning technique that
enables the development of models from decentralized data sources. However, the …
A survey on ChatGPT: AI-generated contents, challenges, and solutions
With the widespread use of large artificial intelligence (AI) models such as ChatGPT, AI-
generated content (AIGC) has garnered increasing attention and is leading a paradigm shift …
FLDetector: Defending federated learning against model poisoning attacks via detecting malicious clients
Federated learning (FL) is vulnerable to model poisoning attacks, in which malicious clients
corrupt the global model via sending manipulated model updates to the server. Existing …
ShieldFL: Mitigating model poisoning attacks in privacy-preserving federated learning
Privacy-Preserving Federated Learning (PPFL) is an emerging secure distributed learning
paradigm that aggregates user-trained local gradients into a federated model through a …
Privacy-preserving Byzantine-robust federated learning via blockchain systems
Federated learning enables clients to train a machine learning model jointly without sharing
their local data. However, due to the centrality of the federated learning framework and the …
FLAME: Taming backdoors in federated learning
Federated learning for generalization, robustness, fairness: A survey and benchmark
Federated learning has emerged as a promising paradigm for privacy-preserving
collaboration among different parties. Recently, with the popularity of federated learning, an …