Blockchain Assisted Decentralized Federated Learning (BLADE-FL) with Lazy Clients
Federated learning (FL), as a distributed machine learning approach, has drawn a great amount of attention in recent years. FL has an inherent advantage in privacy preservation, since users' raw data are processed locally. However, it relies on a centralized server to perform model aggregation, which makes FL vulnerable to server malfunctions and external attacks. In this paper, we propose a novel framework that integrates blockchain into FL, namely blockchain assisted decentralized federated learning (BLADE-FL), to enhance the security of FL. The proposed BLADE-FL performs well in terms of privacy preservation, tamper resistance, and effective cooperation of learning. However, it gives rise to a new problem of training deficiency, caused by lazy clients who plagiarize others' trained models and add artificial noise to conceal their cheating behavior. To be specific, we first develop a convergence bound of the loss function in the presence of lazy clients and prove that it is convex with respect to the total number of generated blocks K. Then, we solve the convex problem by optimizing K to minimize the loss function. Furthermore, we characterize the relationship between the optimal K, the number of lazy clients, and the power of the artificial noise used by lazy clients. We conduct extensive experiments to evaluate the performance of the proposed framework on the MNIST and Fashion-MNIST datasets. Our analytical results are shown to be consistent with the experimental results. In addition, the derived optimal K achieves the minimum value of the loss function and, in turn, the best accuracy performance.
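For concreteness, below is a minimal illustrative sketch of the lazy-client behavior described above: a lazy client copies another client's locally trained model and adds zero-mean Gaussian noise to disguise the plagiarism. The function name, the flattened-parameter representation, and the noise_power parameter are assumptions made here for illustration and are not taken from the paper.

```python
import numpy as np

def lazy_client_update(plagiarized_params: np.ndarray, noise_power: float) -> np.ndarray:
    """Sketch of a lazy client's update: copy another client's trained parameters
    and add zero-mean Gaussian noise (variance = noise_power) to conceal the copying."""
    noise = np.random.normal(0.0, np.sqrt(noise_power), size=plagiarized_params.shape)
    return plagiarized_params + noise

# Toy usage: an honest client's (flattened) model and a lazy client's disguised copy.
honest_params = np.random.randn(1000)                    # stand-in for locally trained weights
lazy_params = lazy_client_update(honest_params, noise_power=0.01)
print(np.linalg.norm(lazy_params - honest_params))       # small distance hints at plagiarism
```

A larger noise_power makes the copy harder to detect, but the paper's convergence analysis ties the noise power, together with the number of lazy clients, to the achievable loss and hence to the optimal choice of K.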