PoisonedFL: Model Poisoning Attacks to Federated Learning via Multi-Round Consistency
Model poisoning attacks are critical security threats to Federated Learning (FL). Existing
model poisoning attacks suffer from two key limitations: 1) they achieve suboptimal …
Poisoning Attacks on Federated Learning-based Wireless Traffic Prediction
Federated Learning (FL) offers a distributed framework to train a global control model across
multiple base stations without compromising the privacy of their local network data. This …
Byzantine-Robust Decentralized Federated Learning
Federated learning (FL) enables multiple clients to collaboratively train machine learning
models without revealing their private training data. In conventional FL, the system follows …
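All three abstracts refer to the conventional FL training loop in which clients send local model updates to a server that aggregates them. As a point of reference only (not the method of any paper listed above), a minimal sketch of the standard FedAvg-style weighted aggregation step is shown below; the function and variable names are illustrative.

```python
import numpy as np

def fedavg(client_updates, client_sizes):
    """Weighted average of client parameter vectors (FedAvg-style aggregation).

    client_updates: list of 1-D numpy arrays, one flattened update per client.
    client_sizes:   local dataset sizes, used as aggregation weights.
    """
    weights = np.asarray(client_sizes, dtype=float)
    weights /= weights.sum()                     # normalize to sum to 1
    stacked = np.stack(client_updates)           # shape: (num_clients, num_params)
    return (weights[:, None] * stacked).sum(axis=0)

# Illustrative round: three clients report updates of different data sizes.
rng = np.random.default_rng(0)
updates = [rng.normal(size=4) for _ in range(3)]
sizes = [100, 50, 150]
global_update = fedavg(updates, sizes)
print(global_update)
```

Because this plain average gives every reported update direct influence on the global model, a single malicious client can shift the result, which is the attack surface the model poisoning and Byzantine-robust papers above address.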