XOR-Net: An efficient computation pipeline for binary neural network inference on edge devices
S. Zhu, L. H. K. Duong, W. Liu
2020 IEEE 26th International Conference on Parallel and Distributed Systems …, 2020. Cited by 24.

EDLAB: A benchmark for edge deep learning accelerators
H. Kong, S. Huai, D. Liu, L. Zhang, H. Chen, S. Zhu, S. Li, W. Liu, M. Rastogi, ...
IEEE Design and Test, 2021. Cited by 17.

TAB: Unified and optimized ternary, binary, and mixed-precision neural network inference on the edge
S. Zhu, L. H. K. Duong, W. Liu
ACM Transactions on Embedded Computing Systems (TECS) 21(5), 1–26, 2022. Cited by 9.

FAT: An in-memory accelerator with fast addition for ternary weight neural networks
S. Zhu, L. H. K. Duong, H. Chen, D. Liu, W. Liu
IEEE Transactions on Computer-Aided Design of Integrated Circuits and …, 2022. Cited by 5.
iMAD: An in-memory accelerator for AdderNet with efficient 8-bit addition and subtraction operations
S. Zhu, S. Li, W. Liu
Proceedings of the Great Lakes Symposium on VLSI 2022, 65–70, 2022. Cited by 5.
Parallel multipath transmission for burst traffic optimization in point-to-point NoCs
H. Chen, Z. Zhang, P. Chen, S. Zhu, W. Liu
Proceedings of the 2021 Great Lakes Symposium on VLSI, 289–294, 2021. Cited by 2.
An efficient sparse LSTM accelerator on embedded FPGAs with bandwidth-oriented pruning
S. Li, S. Zhu, X. Luo, T. Luo, W. Liu
2023 33rd International Conference on Field-Programmable Logic and …, 2023.

iMAT: Energy-efficient in-memory acceleration for ternary neural networks with sparse dot product
S. Zhu, S. Huai, G. Xiong, W. Liu
2023 IEEE/ACM International Symposium on Low Power Electronics and Design …, 2023.

Deep learning acceleration: from quantization to in-memory computing
S. Zhu
Nanyang Technological University, 2022.

Cross-filter compression for CNN inference acceleration
F. Lyu, S. Zhu, W. Liu
arXiv preprint arXiv:2005.09034, 2020.