Authors
Xiang Li, Kaixuan Huang, Wenhao Yang, Shusen Wang, Zhihua Zhang
Publication date
2019/7/4
Journal
arXiv preprint arXiv:1907.02189
Description
Federated learning enables a large number of edge computing devices to jointly learn a model without data sharing. As a leading algorithm in this setting, Federated Averaging (\texttt{FedAvg}) runs Stochastic Gradient Descent (SGD) in parallel on a small subset of the total devices and averages the sequences only once in a while. Despite its simplicity, it lacks theoretical guarantees under realistic settings. In this paper, we analyze the convergence of \texttt{FedAvg} on non-iid data and establish a convergence rate of $\mathcal{O}(\frac{1}{T})$ for strongly convex and smooth problems, where $T$ is the number of SGDs. Importantly, our bound demonstrates a trade-off between communication efficiency and convergence rate. As user devices may be disconnected from the server, we relax the assumption of full device participation to partial device participation and study different averaging schemes; a low device participation rate can be achieved without severely slowing down the learning. Our results indicate that heterogeneity of data slows down the convergence, which matches empirical observations. Furthermore, we provide a necessary condition for \texttt{FedAvg} on non-iid data: the learning rate must decay, even if the full gradient is used; otherwise, the solution will be $\Omega(\eta)$ away from the optimal, where $\eta$ is the learning rate.
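To make the procedure described in the abstract concrete, the sketch below illustrates FedAvg with partial device participation and a decaying learning rate: sampled devices run a few local SGD steps from the current global model, and the server averages the returned models. The quadratic local objectives, sampling scheme, and all names (`N`, `K`, `E`, `local_sgd`, etc.) are illustrative assumptions for this sketch, not the paper's experimental setup.

```python
# Minimal FedAvg sketch: partial participation, E local SGD steps per round,
# uniform model averaging, and a decaying learning rate on non-iid objectives.
import numpy as np

rng = np.random.default_rng(0)

N = 100   # total devices
K = 10    # devices sampled per communication round (partial participation)
E = 5     # local SGD steps between two communications
T = 200   # communication rounds
d = 20    # model dimension

# Heterogeneous (non-iid) local objectives F_k(w) = 0.5 * ||w - c_k||^2,
# so the global optimum is the mean of the device centers c_k.
centers = rng.normal(size=(N, d)) * (1.0 + rng.random(N))[:, None]
w_star = centers.mean(axis=0)

def local_sgd(w, c, steps, lr):
    """Run `steps` noisy SGD steps on F_k(w) = 0.5 * ||w - c||^2."""
    for _ in range(steps):
        grad = (w - c) + 0.1 * rng.normal(size=w.shape)  # stochastic gradient
        w = w - lr * grad
    return w

w = np.zeros(d)
for t in range(T):
    lr = 1.0 / (t + 10)  # decaying learning rate (needed on non-iid data)
    sampled = rng.choice(N, size=K, replace=False)
    # Each sampled device starts from the current global model.
    local_models = [local_sgd(w.copy(), centers[k], E, lr) for k in sampled]
    # Server averages the returned local models.
    w = np.mean(local_models, axis=0)

print("distance to optimum:", np.linalg.norm(w - w_star))
```

Increasing `E` reduces communication but lets local models drift toward their own optima between averaging steps, which is the communication/convergence trade-off the abstract refers to.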