Addressing sparsity in deep neural networks
IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, 2018 • ieeexplore.ieee.org
Neural networks (NNs) have been demonstrated to be useful in a broad range of applications, such as image recognition, automatic translation, and advertisement recommendation. State-of-the-art NNs are known to be both computationally and memory intensive, due to their ever-increasing depth, i.e., multiple layers with massive numbers of neurons and connections (i.e., synapses). Sparse NNs have emerged as an effective solution to reduce the amount of computation and memory required. Though existing NN accelerators are able to efficiently process dense and regular networks, they cannot benefit from the reduction of synaptic weights.

In this paper, we propose a novel accelerator, Cambricon-X, to exploit the sparsity and irregularity of NN models for increased efficiency. The proposed accelerator features a processing element (PE)-based architecture consisting of multiple PEs. An indexing module efficiently selects and transfers needed neurons to connected PEs with a reduced bandwidth requirement, while each PE stores irregular and compressed synapses for local computation in an asynchronous fashion. With 16 PEs, our accelerator is able to achieve at most 544 GOP/s in a small form factor (6.38 mm² and 954 mW at 65 nm).

Experimental results over a number of representative sparse networks show that our accelerator achieves, on average, speedup and energy savings against the state-of-the-art NN accelerator. We further investigate possibilities of leveraging activation sparsity and a multi-issue controller, which improve the efficiency of Cambricon-X. To ease the burden of programmers, we also propose a highly efficient library-based programming environment for our accelerator.
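To make the indexing idea concrete, the following is a minimal software sketch of the kind of computation the abstract describes: an indexing step resolves which input neurons the nonzero synapses actually need, so each PE only receives those neurons and its locally stored compressed weights. The names (`SparseRow`, `pe_dot`) and the step-based encoding are illustrative assumptions for this sketch, not the accelerator's actual hardware format.

```python
# Sketch of indexing + compressed-synapse compute (software analogue only).
from dataclasses import dataclass
from typing import List

@dataclass
class SparseRow:
    """One output neuron's synapses in compressed (step-indexed) form."""
    steps: List[int]      # distance from the previous nonzero synapse's position
    values: List[float]   # the nonzero synaptic weights, in order

def pe_dot(row: SparseRow, neurons: List[float]) -> float:
    """One PE's work: gather only the needed input neurons, then multiply-accumulate."""
    acc, pos = 0.0, -1
    for step, w in zip(row.steps, row.values):
        pos += step                # indexing resolves the absolute neuron index
        acc += w * neurons[pos]    # compressed synapse times selected neuron
    return acc

# Dense weight row [0, 0.5, 0, 0, -1.0, 0, 2.0, 0] stored as 3 (step, value) pairs:
row = SparseRow(steps=[2, 3, 2], values=[0.5, -1.0, 2.0])
neurons = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0]
print(pe_dot(row, neurons))  # 0.5*2.0 + (-1.0)*5.0 + 2.0*7.0 = 10.0
```

Only three of the eight input neurons are touched, which mirrors the abstract's point: the bandwidth and compute savings come from never fetching or multiplying the zero-weight positions.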