PoisonedFL: Model Poisoning Attacks to Federated Learning via Multi-Round Consistency

Y Xie, M Fang, NZ Gong - arXiv preprint arXiv:2404.15611, 2024 - arxiv.org
Model poisoning attacks are critical security threats to Federated Learning (FL). Existing
model poisoning attacks suffer from two key limitations: 1) they achieve suboptimal …

Poisoning Attacks on Federated Learning-based Wireless Traffic Prediction

Z Zhang, M Fang, J Huang, Y Liu - arXiv preprint arXiv:2404.14389, 2024 - arxiv.org
Federated Learning (FL) offers a distributed framework to train a global control model across
multiple base stations without compromising the privacy of their local network data. This …

Byzantine-Robust Decentralized Federated Learning

M Fang, Z Zhang, P Khanduri, S Lu, Y Liu… - arXiv preprint arXiv …, 2024 - arxiv.org
Federated learning (FL) enables multiple clients to collaboratively train machine learning
models without revealing their private training data. In conventional FL, the system follows …
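The abstracts above all build on the same baseline federated-learning setup: clients train locally on private data and a server (or, in the decentralized case, the clients themselves) combines their model updates. As a point of reference only, the sketch below shows plain FedAvg-style weighted averaging; it is an illustrative assumption, not the aggregation rule or attack/defense of any of the listed papers, and the function and variable names are hypothetical.

# Minimal sketch of FedAvg-style aggregation (illustrative only; names
# like fedavg_aggregate and client_updates are hypothetical, and this is
# not the method of any paper listed above).
import numpy as np

def fedavg_aggregate(client_updates, client_sizes):
    """Weighted average of client model updates by local dataset size."""
    total = sum(client_sizes)
    weights = [n / total for n in client_sizes]
    # Each update is a flat parameter vector; stack and average per coordinate.
    stacked = np.stack(client_updates)
    return np.average(stacked, axis=0, weights=weights)

# Toy usage: three clients with different amounts of local data.
updates = [np.random.randn(10) for _ in range(3)]
sizes = [100, 250, 50]
global_update = fedavg_aggregate(updates, sizes)
print(global_update.shape)  # (10,)

Model poisoning attacks such as those studied above target exactly this step: a malicious client submits a crafted update so that the averaged global model degrades, which is why robust (e.g., Byzantine-resilient) aggregation rules replace the plain weighted mean.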