Deep neural network pruning using persistent homology

S Watanabe, H Yamana - 2020 IEEE Third International Conference on Artificial …, 2020 - ieeexplore.ieee.org
Deep neural networks (DNNs) have improved the performance of artificial intelligence systems in various fields, including image analysis, speech recognition, and text classification. However, their enormous consumption of computational resources prevents DNNs from operating on small computers such as edge sensors and handheld devices. Network pruning (NP), which removes parameters from trained DNNs, is one of the prominent methods of reducing the resource consumption of DNNs. In this paper, we propose a novel NP method, hereafter referred to as PHPM, based on persistent homology (PH). PH investigates the inner representation of knowledge in DNNs, and PHPM exploits this analysis to improve the efficiency of pruning. To prevent deterioration of accuracy, PHPM prunes DNNs in ascending order of the magnitudes of the combinational effects among neurons, which are calculated using one-dimensional PH. We compared PHPM with the global magnitude pruning method (GMP), one of the common baselines for evaluating pruning methods. Evaluation results show that DNNs pruned by PHPM achieve higher classification accuracy than those pruned by GMP.
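A minimal sketch of the two strategies the abstract contrasts, assuming a PyTorch model. The GMP baseline follows its standard definition (zero out the globally smallest-magnitude weights). The PH part is only an illustrative reconstruction of the idea described above, not the authors' exact PHPM procedure: the abstract does not specify how the filtration is built, so the strength-to-distance mapping and the bipartite distance matrix below are assumptions, and the diagram is computed with the `ripser` library.

```python
# Sketch of global magnitude pruning (GMP) and an illustrative
# one-dimensional persistent-homology (PH) computation for one layer.
# The PH construction is an assumed reconstruction, not the paper's PHPM.
import numpy as np
import torch
import torch.nn as nn
from ripser import ripser  # pip install ripser

def global_magnitude_prune(model: nn.Module, sparsity: float) -> None:
    """GMP baseline: zero out the fraction `sparsity` of weights with the
    smallest absolute value, pooled globally across all weight matrices."""
    all_w = torch.cat([p.detach().abs().flatten()
                       for p in model.parameters() if p.dim() > 1])
    threshold = torch.quantile(all_w, sparsity)
    with torch.no_grad():
        for p in model.parameters():
            if p.dim() > 1:
                p.masked_fill_(p.abs() < threshold, 0.0)

def one_dim_ph_diagram(weight: torch.Tensor) -> np.ndarray:
    """Illustrative PH step: treat a layer's |W| as connection strengths
    between input and output neurons, map strengths to distances
    (strong connection -> small distance; an assumed mapping), and compute
    the one-dimensional persistence diagram of the Vietoris-Rips filtration."""
    w = weight.detach().abs().cpu().numpy()
    dist = 1.0 / (w + 1e-8)          # assumed strength-to-distance map
    n_out, n_in = dist.shape
    n = n_out + n_in
    cap = dist.max() * 10.0          # same-layer neurons: no direct edge
    d = np.full((n, n), cap)
    d[:n_out, n_out:] = dist         # bipartite block: output x input neurons
    d[n_out:, :n_out] = dist.T
    np.fill_diagonal(d, 0.0)
    return ripser(d, distance_matrix=True, maxdim=1)['dgms'][1]

# Usage (hypothetical two-layer MLP):
# model = nn.Sequential(nn.Linear(784, 128), nn.ReLU(), nn.Linear(128, 10))
# global_magnitude_prune(model, sparsity=0.5)
# dgm1 = one_dim_ph_diagram(model[0].weight)  # (birth, death) pairs of 1-cycles
```

PHPM then ranks connections by the magnitude of the PH-derived combinational effects and prunes in ascending order; how the persistence diagram is reduced to per-connection scores is not given in this snippet, so that step is omitted from the sketch.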