Han et al. "Deep compression and EIE: Efficient inference engine on compressed deep neural network."
Han et al. "Learning both Weights and Connections for Efficient Neural Networks", NIPS 2015 [2].
Han et al. "Deep Compression: Compressing Deep Neural Networks with Pruning, Trained Quantization and Huffman Coding", Deep Learning Symposium 2015, ICLR 2016 (best paper award) [3].
Yao, Han, et al. "Hardware-friendly convolutional neural network with even-number filter size", ICLR workshop 2016.