Byzantine-robust decentralized federated learning

M Fang, Z Zhang, Hairi, P Khanduri, J Liu, S Lu… - Proceedings of the …, 2024 - dl.acm.org
Federated learning (FL) enables multiple clients to collaboratively train machine learning
models without revealing their private training data. In conventional FL, the system follows …

Anti-Byzantine attacks enabled vehicle selection for asynchronous federated learning in vehicular edge computing

Z Cui, X Xiao, W Qiong, F Pingyi, F Qiang… - China …, 2024 - ieeexplore.ieee.org
In vehicular edge computing (VEC), asynchronous federated learning (AFL) is used, where the
edge receives a local model and updates the global model, effectively reducing the global …

Poisoning federated recommender systems with fake users

M Yin, Y Xu, M Fang, NZ Gong - Proceedings of the ACM on Web …, 2024 - dl.acm.org
Federated recommendation is a prominent use case within federated learning, yet it remains
susceptible to various attacks, ranging from user-side to server-side vulnerabilities. Poisoning attacks are …

RFed: Robustness-Enhanced Privacy-Preserving Federated Learning Against Poisoning Attack

Y Miao, X Yan, X Li, S Xu, X Liu, H Li… - IEEE Transactions on …, 2024 - ieeexplore.ieee.org
Federated learning not only enables collaborative model training but also effectively
preserves user privacy. However, with the widespread application of privacy-preserving …

Robust Federated Learning Mitigates Client-side Training Data Distribution Inference Attacks

Y Xu, M Yin, M Fang, NZ Gong - Companion Proceedings of the ACM on …, 2024 - dl.acm.org
Recent studies have revealed that federated learning (FL), once considered secure due to
clients not sharing their private data with the server, is vulnerable to attacks such as client …

Asynchronous byzantine federated learning

B Cox, A Mălan, LY Chen, J Decouchant - arXiv preprint arXiv:2406.01438, 2024 - arxiv.org
Federated learning (FL) enables a set of geographically distributed clients to collectively
train a model through a server. Classically, the training process is synchronous, but can be …

PoisonedFL: Model Poisoning Attacks to Federated Learning via Multi-Round Consistency

Y Xie, M Fang, NZ Gong - arXiv preprint arXiv:2404.15611, 2024 - arxiv.org
Model poisoning attacks are critical security threats to Federated Learning (FL). Existing
model poisoning attacks suffer from two key limitations: 1) they achieve suboptimal …

Poisoning Attacks on Federated Learning-based Wireless Traffic Prediction

Z Zhang, M Fang, J Huang, Y Liu - arXiv preprint arXiv:2404.14389, 2024 - arxiv.org
Federated Learning (FL) offers a distributed framework to train a global control model across
multiple base stations without compromising the privacy of their local network data. This …

Better safe than sorry: Constructing Byzantine-robust federated learning with synthesized trust

G Geng, T Cai, Z Yang - Electronics, 2023 - mdpi.com
Byzantine-robust federated learning enables the central server to obtain a high-quality global
model even in the presence of a limited set of malicious clients. The general idea of existing learning …

MODA: model ownership deprivation attack in asynchronous federated learning

X Zhang, S Lin, C Chen, X Chen - IEEE Transactions on …, 2023 - ieeexplore.ieee.org
Training a deep learning model from scratch requires a large amount of labeled data,
computational resources, and expert knowledge. Thus, the time-consuming and complicated …