Fault-tolerant federated reinforcement learning with theoretical guarantee

X Fan, Y Ma, Z Dai, W Jing, C Tan… - Advances in Neural …, 2021 - proceedings.neurips.cc
The growing literature on Federated Learning (FL) has recently inspired Federated
Reinforcement Learning (FRL) to encourage multiple agents to federatively build a better …

Byzantine-resilient decentralized stochastic optimization with robust aggregation rules

Z Wu, T Chen, Q Ling - IEEE Transactions on Signal Processing, 2023 - ieeexplore.ieee.org
This article focuses on decentralized stochastic optimization in the presence of Byzantine
attacks. During the optimization process, an unknown number of malfunctioning or malicious …
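The robust aggregation rules this entry refers to can be illustrated with a minimal sketch (not from the paper): coordinate-wise median, a standard Byzantine-robust rule, compared against the plain mean when one worker reports a corrupted gradient.

```python
# Toy illustration (not the paper's method): coordinate-wise median as a
# Byzantine-robust aggregation rule, versus the non-robust mean.
from statistics import mean, median

def aggregate(gradients, rule):
    # Apply `rule` independently to each coordinate across all workers.
    return [rule(coord) for coord in zip(*gradients)]

# Three honest workers report gradients near [1.0, -2.0];
# one Byzantine worker reports an arbitrarily corrupted vector.
honest = [[1.0, -2.0], [1.1, -1.9], [0.9, -2.1]]
byzantine = [[1e6, -1e6]]
grads = honest + byzantine

print(aggregate(grads, mean))    # badly skewed by the single attacker
print(aggregate(grads, median))  # → [1.05, -2.05], close to the honest gradients
```

The median discards the extreme coordinate contributed by the Byzantine worker, so the aggregate stays near the honest consensus; the mean is pulled arbitrarily far away.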

Byzantine-robust distributed online learning: Taming adversarial participants in an adversarial environment

X Dong, Z Wu, Q Ling, Z Tian - IEEE Transactions on Signal …, 2023 - ieeexplore.ieee.org
This paper studies distributed online learning under Byzantine attacks. The performance of
an online learning algorithm is often characterized by (adversarial) regret, which evaluates …

Byzantine-robust variance-reduced federated learning over distributed non-iid data

J Peng, Z Wu, Q Ling, T Chen - Information Sciences, 2022 - Elsevier
We consider the federated learning problem where data on workers are not independent
and identically distributed (iid). During the learning process, an unknown number of …

BROADCAST: Reducing both stochastic and compression noise to robustify communication-efficient federated learning

H Zhu, Q Ling - arXiv preprint arXiv:2104.06685, 2021 - arxiv.org
Communication between workers and the master node to collect local stochastic gradients is
a key bottleneck in a large-scale federated learning system. Various recent works have …
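As a toy sketch (not from the paper) of the kind of compression used to relieve this communication bottleneck, top-k sparsification transmits only the k largest-magnitude coordinates of a gradient and zeroes out the rest:

```python
# Toy sketch (not the paper's scheme): top-k gradient sparsification,
# a common compressor in communication-efficient federated learning.
def top_k(grad, k):
    # Indices of the k entries with the largest absolute value.
    keep = set(sorted(range(len(grad)), key=lambda i: abs(grad[i]), reverse=True)[:k])
    # Zero out every coordinate not selected.
    return [g if i in keep else 0.0 for i, g in enumerate(grad)]

print(top_k([0.1, -3.0, 0.05, 2.0], 2))  # → [0.0, -3.0, 0.0, 2.0]
```

The compressor introduces its own error (the dropped coordinates), which is the "compression noise" the title refers to alongside stochastic gradient noise.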

Byzantine-robust distributed learning with compression

H Zhu, Q Ling - IEEE Transactions on Signal and Information …, 2023 - ieeexplore.ieee.org
Communication between workers and the master node to collect local stochastic gradients is
a key bottleneck in a large-scale distributed learning system. Various recent works have …

Variance reduction-boosted Byzantine robustness in decentralized stochastic optimization

J Peng, W Li, Q Ling - ICASSP 2022 - 2022 IEEE International …, 2022 - ieeexplore.ieee.org
We consider the Byzantine-robust decentralized stochastic optimization problem, where
every agent periodically communicates with its neighbors to exchange the local models, and …
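The variance reduction this entry builds on can be sketched with the SVRG-style gradient estimator on a one-dimensional least-squares toy problem (this is an illustrative sketch, not the paper's decentralized algorithm):

```python
# Toy sketch (not the paper's algorithm): the SVRG variance-reduced gradient
# estimator on loss_i(x) = 0.5 * (x - d_i)^2.
import random

data = [1.0, 2.0, 3.0, 4.0]
grad_i = lambda x, d: x - d          # per-sample gradient

def full_grad(x):
    return sum(grad_i(x, d) for d in data) / len(data)

x, lr = 0.0, 0.5
for epoch in range(20):
    snapshot = x
    mu = full_grad(snapshot)         # full gradient at the snapshot point
    for _ in range(len(data)):
        d = random.choice(data)
        # Variance-reduced estimator: unbiased, with variance that shrinks as
        # x and the snapshot approach the optimum (for this quadratic toy the
        # per-sample terms cancel and the estimator is exact).
        g = grad_i(x, d) - grad_i(snapshot, d) + mu
        x -= lr * g

print(round(x, 3))  # → 2.5, the minimizer (the mean of the data)
```

Replacing the raw stochastic gradient with such a control-variate estimator is what removes the stochastic-gradient-noise term from the learning error in this line of work.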

Byzantine-robust decentralized stochastic optimization with stochastic gradient noise-independent learning error

J Peng, W Li, Q Ling - arXiv preprint arXiv:2308.05292, 2023 - arxiv.org
This paper studies Byzantine-robust stochastic optimization over a decentralized network,
where every agent periodically communicates with its neighbors to exchange local models …

Distributed online learning with adversarial participants in an adversarial environment

X Dong, Z Wu, Q Ling, Z Tian - ICASSP 2023 - 2023 IEEE …, 2023 - ieeexplore.ieee.org
This paper studies distributed online learning under Byzantine attacks. The performance of
an online learning algorithm is characterized by (adversarial) regret, and a sublinear bound …

Byzantine-robust decentralized stochastic optimization with stochastic gradient noise-independent learning error

J Peng, W Li, Q Ling - Signal Processing, 2024 - Elsevier
This paper studies Byzantine-robust stochastic optimization over a decentralized network,
where every agent periodically communicates with its neighbors to exchange local models …