Scalable smartphone cluster for deep learning

B. Na, J. Jang, S. Park, S. Kim, J. Kim, M. S. Jeong, K. C. Kim, S. Heo, Y. Kim, S. Yoon
arXiv preprint arXiv:2110.12172, 2021 (arxiv.org)
Various deep learning applications on smartphones have been rapidly emerging, but training deep neural networks (DNNs) imposes too large a computational burden to be executed on a single smartphone. A portable cluster, which connects smartphones over a wireless network and supports parallel computation across them, is a potential approach to resolving the issue. However, our findings show that the limitations of wireless communication restrict the cluster size to at most 30 smartphones. Such small-scale clusters have insufficient computational power to train DNNs from scratch. In this paper, we propose a scalable smartphone cluster that enables deep learning training by giving up portability to increase computational efficiency. The cluster connects 138 Galaxy S10+ devices over a wired Ethernet network. We implemented large-batch synchronous training of DNNs based on Caffe, a deep learning library. The smartphone cluster yielded 90% of the speed of a P100 when training ResNet-50, and approximately a 43x speed-up over a V100 when training MobileNet-v1.
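The "large-batch synchronous training" the abstract refers to is the standard data-parallel scheme: each device computes a gradient on its own mini-batch shard, the gradients are averaged across devices, and all devices apply the same update, which is equivalent to one step on a single large global batch. Below is a minimal Python/NumPy sketch of that scheme on a toy linear model; the worker count, batch sizes, model, and learning rate are illustrative assumptions, not the paper's Caffe-based implementation on 138 phones.

```python
import numpy as np

# Sketch of large-batch synchronous data-parallel SGD.
# All values here are hypothetical stand-ins for the paper's setup.
NUM_WORKERS = 8      # stand-in for the 138 devices in the cluster
LOCAL_BATCH = 32     # per-worker batch; global batch = NUM_WORKERS * LOCAL_BATCH
LR = 0.1

rng = np.random.default_rng(0)
w_true = rng.normal(size=10)   # ground-truth parameters of a toy linear model
w = np.zeros(10)               # shared model replica, identical on every worker

def local_gradient(w, rng):
    """One worker: sample a local mini-batch and return its gradient."""
    X = rng.normal(size=(LOCAL_BATCH, 10))
    y = X @ w_true
    err = X @ w - y
    return X.T @ err / LOCAL_BATCH   # least-squares gradient on the local shard

for step in range(100):
    # Each worker computes a gradient on its own shard (on devices, in parallel).
    grads = [local_gradient(w, rng) for _ in range(NUM_WORKERS)]
    # Synchronous all-reduce: average the gradients so every replica applies
    # the same update, matching one step on the full global batch.
    g = np.mean(grads, axis=0)
    w -= LR * g

print("final parameter error:", float(np.mean((w - w_true) ** 2)))
```

Because every step waits for all workers, this scheme is communication-bound, which is consistent with the paper's observation that wireless links cap cluster size and its move to wired Ethernet.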