Binary-decomposed DCNN for accelerating computation and compressing model without retraining

R Kamiya, T Yamashita, M Ambai, I Sato, Y Yamauchi, H Fujiyoshi
Proceedings of the IEEE International Conference on Computer Vision, 2017 - openaccess.thecvf.com
Abstract
Deep convolutional neural networks (DCNNs) have a large number of parameters, which results in increasingly long computation times and large model sizes. To embed DCNNs in mobile devices, the model size must be compressed and computation must be accelerated. This paper proposes Binary-decomposed DCNN, which resolves these issues without the need for retraining. Our method replaces real-valued inner-product computations with binary inner-product computations in existing network models, accelerating inference and decreasing model size without retraining. Binary computations can be done at high speed using logical operators such as XOR and AND, together with bit counting. In tests using AlexNet with the ImageNet dataset, speed increased by a factor of 1.79, model size was compressed by approximately 80%, and the increase in error rate was limited to 1.20%. With VGG-16, speed increased by a factor of 2.07, model size decreased by 81%, and error increased by only 2.16%.
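To illustrate the XOR-and-bit-count idea mentioned in the abstract, below is a minimal sketch in Python, not the authors' implementation: the helper names pack_signs and binary_dot are hypothetical. It computes the inner product of two {-1, +1} vectors packed into integer words with a single XOR followed by a popcount.

# A minimal sketch (assumed helpers, not the paper's code) of a binary
# inner product between two {-1, +1} vectors packed into integer words.
# For packed sign vectors a and b of length n:
#     dot(a, b) = n - 2 * popcount(a XOR b)
# because matching bit positions contribute +1 and differing ones -1.

def pack_signs(signs):
    # Pack a sequence of +1/-1 values into one int: bit i is 1 iff signs[i] == +1.
    word = 0
    for i, s in enumerate(signs):
        if s > 0:
            word |= 1 << i
    return word

def binary_dot(a_bits, b_bits, n):
    # Inner product of two packed {-1, +1} vectors of length n via XOR + bit count.
    return n - 2 * bin(a_bits ^ b_bits).count("1")

# Example: [+1, -1, +1, -1] . [+1, +1, -1, -1] = 1 - 1 - 1 + 1 = 0
a = pack_signs([+1, -1, +1, -1])
b = pack_signs([+1, +1, -1, -1])
print(binary_dot(a, b, 4))  # -> 0

Roughly speaking, the paper's decomposition approximates each real-valued weight vector as a weighted sum of a few such binary basis vectors, so one real-valued inner product reduces to a handful of these cheap binary inner products plus multiplications by the real coefficients.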