Authors
Keith Bonawitz, Vladimir Ivanov, Ben Kreuter, Antonio Marcedone, H Brendan McMahan, Sarvar Patel, Daniel Ramage, Aaron Segal, Karn Seth
Publication date
2016/11/14
Journal
arXiv preprint arXiv:1611.04482
Description
Secure Aggregation protocols allow a collection of mutually distrustful parties, each holding a private value, to collaboratively compute the sum of those values without revealing the values themselves. We consider training a deep neural network in the Federated Learning model, using distributed stochastic gradient descent across user-held training data on mobile devices, wherein Secure Aggregation protects each user's model gradient. We design a novel, communication-efficient Secure Aggregation protocol for high-dimensional data that tolerates up to 1/3 of users failing to complete the protocol. For 16-bit input values, our protocol offers 1.73x communication expansion for 2^10 users and 2^20-dimensional vectors, and 1.98x expansion for 2^14 users and 2^24-dimensional vectors.
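The core idea behind such protocols can be illustrated with pairwise additive masking: each pair of users agrees on a random mask that one user adds and the other subtracts, so every mask cancels when the server sums the masked inputs. The sketch below is a toy illustration only, not the paper's protocol; in the actual construction the pairwise masks are derived from Diffie-Hellman key agreement and a PRG, with secret sharing to recover from dropped users. All names and the modulus here are illustrative.

```python
import random

def mask_inputs(values, modulus=2**16):
    """Toy pairwise masking: masks cancel in the sum of all masked values."""
    n = len(values)
    masked = list(values)
    for i in range(n):
        for j in range(i + 1, n):
            # Stand-in for a mask both parties would derive from a shared key.
            m = random.randrange(modulus)
            masked[i] = (masked[i] + m) % modulus  # user i adds the pair mask
            masked[j] = (masked[j] - m) % modulus  # user j subtracts it
    return masked

values = [3, 14, 15, 9, 26]
masked = mask_inputs(values)
# Each masked value looks random on its own, but the sums agree mod 2**16:
assert sum(masked) % 2**16 == sum(values) % 2**16
```

The server learns only the modular sum; any single masked value is uniformly distributed. The paper's contribution is making this idea robust to user dropout and cheap in communication for high-dimensional vectors.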
Total citations
Cited by year: 2017-2024 (per-year citation counts garbled in extraction)
Scholar articles
K Bonawitz, V Ivanov, B Kreuter, A Marcedone… - arXiv preprint arXiv:1611.04482, 2016