Byzantine-robust decentralized federated learning
Federated learning (FL) enables multiple clients to collaboratively train machine learning
models without revealing their private training data. In conventional FL, the system follows …
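As a point of reference for the kind of aggregation rule Byzantine-robust FL work builds on, here is a minimal, generic sketch of coordinate-wise trimmed-mean aggregation; it is not the method of the paper above, and the names `trimmed_mean_aggregate`, `client_updates`, and `trim_ratio` are illustrative assumptions.

```python
# Generic Byzantine-robust aggregation sketch (coordinate-wise trimmed mean).
# Illustrative only; not the aggregation rule proposed in the paper above.
import numpy as np

def trimmed_mean_aggregate(client_updates: list[np.ndarray], trim_ratio: float = 0.2) -> np.ndarray:
    """Aggregate flattened client updates, discarding the largest and smallest
    `trim_ratio` fraction of values in every coordinate before averaging."""
    stacked = np.stack(client_updates)            # shape: (num_clients, num_params)
    k = int(trim_ratio * stacked.shape[0])        # values trimmed per side, per coordinate
    sorted_vals = np.sort(stacked, axis=0)        # sort each coordinate across clients
    kept = sorted_vals[k: stacked.shape[0] - k]   # drop the k extremes on each side
    return kept.mean(axis=0)                      # average what remains

# Example: 10 honest clients plus 2 Byzantine clients sending oversized updates.
honest = [np.random.normal(0.0, 0.1, size=100) for _ in range(10)]
byzantine = [np.full(100, 50.0) for _ in range(2)]
global_update = trimmed_mean_aggregate(honest + byzantine, trim_ratio=0.2)
```

Because the extremes in every coordinate are discarded, a small fraction of arbitrarily corrupted updates cannot drag the aggregate far from the honest mean, which is the basic robustness property this line of work strengthens.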
Anti-Byzantine attacks enabled vehicle selection for asynchronous federated learning in vehicular edge computing
In vehicular edge computing (VEC), asynchronous federated learning (AFL) is used, where the
edge receives a local model and updates the global model, effectively reducing the global …
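To make the asynchronous update concrete, below is a minimal sketch of the kind of staleness-weighted merge an AFL server or edge can apply when a local model arrives; the rule, and names such as `staleness` and `base_alpha`, are assumptions for illustration, not the paper's method.

```python
# Generic asynchronous FL update sketch: merge each arriving local model
# immediately, down-weighting stale contributions. Illustrative only.
import numpy as np

def async_update(global_model: np.ndarray,
                 local_model: np.ndarray,
                 staleness: int,
                 base_alpha: float = 0.5) -> np.ndarray:
    """Blend an incoming local model into the global one; staler models
    (trained on an older global round) contribute less."""
    alpha = base_alpha / (1.0 + staleness)        # simple staleness decay
    return (1.0 - alpha) * global_model + alpha * local_model

# Example: a vehicle's model trained on global round 7 arrives while the
# edge is already at round 10, so its staleness is 3.
global_model = np.zeros(100)
local_model = np.random.normal(0.0, 0.1, size=100)
global_model = async_update(global_model, local_model, staleness=10 - 7)
```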
Poisoning federated recommender systems with fake users
Federated recommendation is a prominent use case within federated learning, yet it remains
susceptible to various attacks, from user to server-side vulnerabilities. Poisoning attacks are …
RFed: Robustness-Enhanced Privacy-Preserving Federated Learning Against Poisoning Attack
Federated learning not only enables collaborative model training, but also effectively
preserves user privacy. However, with the widespread application of privacy-preserving …
Robust Federated Learning Mitigates Client-side Training Data Distribution Inference Attacks
Recent studies have revealed that federated learning (FL), once considered secure due to
clients not sharing their private data with the server, is vulnerable to attacks such as client …
Asynchronous byzantine federated learning
Federated learning (FL) enables a set of geographically distributed clients to collectively
train a model through a server. Classically, the training process is synchronous, but can be …
PoisonedFL: Model Poisoning Attacks to Federated Learning via Multi-Round Consistency
Model poisoning attacks are critical security threats to Federated Learning (FL). Existing
model poisoning attacks suffer from two key limitations: 1) they achieve suboptimal …
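For readers unfamiliar with the threat model, the sketch below shows one generic, simple form of model poisoning (a scaled sign flip of the benign update direction); it is not the PoisonedFL attack described above, and `scale` is a hypothetical knob.

```python
# Generic model poisoning sketch: submit an amplified update pointing opposite
# to the benign direction so a plain average is dragged off course.
# Illustrative only; NOT the PoisonedFL attack.
import numpy as np

def poisoned_update(benign_update: np.ndarray, scale: float = 5.0) -> np.ndarray:
    """Return a malicious update that reverses and amplifies the benign one."""
    return -scale * benign_update

# Example: the attacker estimates the benign update from its own local data
# and submits the flipped, scaled version in its place.
benign = np.random.normal(0.0, 0.1, size=100)
malicious = poisoned_update(benign, scale=5.0)
```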
Poisoning Attacks on Federated Learning-based Wireless Traffic Prediction
Federated Learning (FL) offers a distributed framework to train a global control model across
multiple base stations without compromising the privacy of their local network data. This …
Better safe than sorry: Constructing Byzantine-robust federated learning with synthesized trust
G Geng, T Cai, Z Yang - Electronics, 2023 - mdpi.com
Byzantine-robust federated learning enables the central server to obtain a high-quality global
model even in the presence of a limited number of malicious clients. The general idea of existing learning …
MODA: model ownership deprivation attack in asynchronous federated learning
Training a deep learning model from scratch requires a large amount of labeled data,
computational resources, and expert knowledge. Thus, the time-consuming and complicated …