Sparse training via boosting pruning plasticity with neuroregeneration

S Liu, T Chen, X Chen, Z Atashgahi… - Advances in …, 2021 - proceedings.neurips.cc
Works on the lottery ticket hypothesis (LTH) and single-shot network pruning (SNIP) have recently drawn considerable attention to post-training pruning (iterative magnitude pruning) and before …
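
For readers unfamiliar with the post-training pruning the snippet mentions, below is a minimal NumPy sketch of one round of magnitude pruning (the core step of iterative magnitude pruning). The layer size and sparsity level are illustrative assumptions, not settings from the paper.

```python
import numpy as np

def magnitude_prune(weights, sparsity):
    """Zero out the smallest-magnitude fraction `sparsity` of the weights."""
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)
    if k == 0:
        return weights, np.ones_like(weights, dtype=bool)
    threshold = np.partition(flat, k - 1)[k - 1]  # k-th smallest magnitude
    mask = np.abs(weights) > threshold
    return weights * mask, mask

# Illustrative use: prune 80% of a random weight matrix; in real
# iterative magnitude pruning one would then retrain with the mask fixed.
rng = np.random.default_rng(0)
W = rng.normal(size=(256, 256))
W_pruned, mask = magnitude_prune(W, sparsity=0.8)
print(f"kept {mask.mean():.0%} of weights")
```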

Low rank regularization: A review

Z Hu, F Nie, R Wang, X Li - Neural Networks, 2021 - Elsevier
Low Rank Regularization (LRR), in essence, introduces a low-rank or approximately low-rank assumption on the target we aim to learn, and has achieved great …
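
As a concrete instance of the low-rank assumption the snippet refers to, a standard LRR formulation penalizes the nuclear norm, the usual convex surrogate of rank. The notation below is the textbook form, not taken from the paper itself:

```latex
\min_{W}\; \mathcal{L}(W) + \lambda \lVert W \rVert_{*},
\qquad \lVert W \rVert_{*} = \sum_{i} \sigma_{i}(W)
```

Here \(\sigma_{i}(W)\) are the singular values of \(W\), and \(\lambda\) controls how strongly an (approximately) low-rank solution is encouraged.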

Low-rank compression of neural nets: Learning the rank of each layer

Y Idelbayev… - Proceedings of the IEEE …, 2020 - openaccess.thecvf.com
Neural net compression can be achieved by approximating each layer's weight matrix by a
low-rank matrix. The real difficulty in doing this is not in training the resulting neural net …
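
The snippet describes replacing each layer's weight matrix with a low-rank approximation; here is a minimal NumPy sketch of that per-layer step via truncated SVD. The fixed rank is an illustrative choice, whereas the paper's contribution is learning the rank of each layer:

```python
import numpy as np

def low_rank_factorize(W, rank):
    """Approximate W (m x n) as U_r @ V_r with U_r (m x r) and V_r (r x n).

    Replacing a dense layer W with the pair (U_r, V_r) cuts its
    parameter count from m*n to r*(m + n) when r is small.
    """
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    U_r = U[:, :rank] * s[:rank]   # fold singular values into the left factor
    V_r = Vt[:rank, :]
    return U_r, V_r

rng = np.random.default_rng(0)
W = rng.normal(size=(512, 256))
U_r, V_r = low_rank_factorize(W, rank=32)
err = np.linalg.norm(W - U_r @ V_r) / np.linalg.norm(W)
print(f"relative approximation error at rank 32: {err:.3f}")
```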

BSQ: Exploring bit-level sparsity for mixed-precision neural network quantization

H Yang, L Duan, Y Chen, H Li - arXiv preprint arXiv:2102.10462, 2021 - arxiv.org
Mixed-precision quantization can potentially achieve the optimal tradeoff between performance and compression rate of deep neural networks, and has thus been widely …
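
For context on what per-layer bit-widths mean in practice, below is a minimal symmetric uniform quantizer in NumPy. BSQ's actual bit-level sparsity training is more involved; this is only an illustrative baseline with assumed bit-widths:

```python
import numpy as np

def quantize_symmetric(W, bits):
    """Symmetric uniform quantization of W to a given bit-width."""
    qmax = 2 ** (bits - 1) - 1           # e.g. 127 for 8 bits
    scale = np.abs(W).max() / qmax
    q = np.clip(np.round(W / scale), -qmax, qmax)
    return q * scale                     # dequantized ("fake-quantized") weights

rng = np.random.default_rng(0)
W = rng.normal(size=(256, 256))
# Mixed precision amounts to choosing a different bit-width per layer,
# e.g. 8 bits for sensitive layers and 4 bits elsewhere.
for bits in (8, 4, 2):
    err = np.linalg.norm(W - quantize_symmetric(W, bits)) / np.linalg.norm(W)
    print(f"{bits}-bit relative error: {err:.4f}")
```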

TRP: Trained rank pruning for efficient deep neural networks

Y Xu, Y Li, S Zhang, W Wen, B Wang, Y Qi… - arXiv preprint arXiv …, 2020 - arxiv.org
To enable DNNs on edge devices like mobile phones, low-rank approximation has been
widely adopted because of its solid theoretical rationale and efficient implementations …

Learning low-rank deep neural networks via singular vector orthogonality regularization and singular value sparsification

H Yang, M Tang, W Wen, F Yan, D Hu… - Proceedings of the …, 2020 - openaccess.thecvf.com
Modern deep neural networks (DNNs) often incur high memory consumption and heavy computational loads. In order to deploy DNN algorithms efficiently on edge or mobile …
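
The title names two regularizers applied while training a layer in SVD form. The following NumPy sketch shows how such penalties could be computed for one factorized layer; the penalty forms, loss weights, and shapes are assumptions for illustration, not the paper's exact settings:

```python
import numpy as np

def svd_layer_penalties(U, s, V, alpha=1.0, beta=1e-3):
    """Penalties for a layer stored in SVD form, W = U @ diag(s) @ V.T.

    - The orthogonality term keeps the columns of U and V close to
      orthonormal, so the entries of `s` behave like singular values.
    - The L1 term sparsifies `s`, i.e. drives the layer toward low rank.
    """
    I_u = np.eye(U.shape[1])
    I_v = np.eye(V.shape[1])
    ortho = (np.linalg.norm(U.T @ U - I_u) ** 2
             + np.linalg.norm(V.T @ V - I_v) ** 2)
    sparsity = np.abs(s).sum()
    return alpha * ortho + beta * sparsity

rng = np.random.default_rng(0)
U, s, V = rng.normal(size=(128, 64)), rng.normal(size=64), rng.normal(size=(96, 64))
print(svd_layer_penalties(U, s, V))
```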

An effective low-rank compression with a joint rank selection followed by a compression-friendly training

M Eo, S Kang, W Rhee - Neural Networks, 2023 - Elsevier
Low-rank compression is a popular neural network compression technique that is known to pose two main challenges. The first challenge is determining the …
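
One common heuristic for the rank-selection challenge the snippet raises is to keep the smallest rank that retains a fixed share of the singular-value energy. The sketch below shows that heuristic; the 95% threshold is an assumption, and the paper's joint rank selection is a different, learned approach:

```python
import numpy as np

def select_rank(W, energy=0.95):
    """Smallest rank whose singular values retain `energy` of the total."""
    s = np.linalg.svd(W, compute_uv=False)
    cumulative = np.cumsum(s ** 2) / np.sum(s ** 2)
    return int(np.searchsorted(cumulative, energy) + 1)

rng = np.random.default_rng(0)
W = rng.normal(size=(300, 100)) @ rng.normal(size=(100, 200))  # rank <= 100
print("selected rank:", select_rank(W))
```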

Pruning by training: A novel deep neural network compression framework for image processing

G Tian, J Chen, X Zeng, Y Liu - IEEE Signal Processing Letters, 2021 - ieeexplore.ieee.org
Filter pruning for a pre-trained convolutional neural network is most often performed using hand-crafted constraints or criteria such as norms, ranks, etc. Typically, the pruning …
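
The snippet contrasts learned pruning with hand-crafted criteria such as norms. Below is a minimal NumPy sketch of the classic norm-based baseline (ranking conv filters by L1 norm) that such "pruning by training" work moves away from; the layer shape is illustrative:

```python
import numpy as np

def rank_filters_by_l1(conv_weights):
    """Order filters of a conv layer (out_ch, in_ch, kH, kW) by L1 norm.

    Norm-based criteria prune the filters with the smallest norms;
    learned ("pruning by training") methods instead let training
    decide which filters to remove.
    """
    norms = np.abs(conv_weights).reshape(conv_weights.shape[0], -1).sum(axis=1)
    return np.argsort(norms)  # ascending: first entries are pruning candidates

rng = np.random.default_rng(0)
filters = rng.normal(size=(64, 32, 3, 3))
print("weakest five filters:", rank_filters_by_l1(filters)[:5])
```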

CSTAR: towards compact and structured deep neural networks with adversarial robustness

H Phan, M Yin, Y Sui, B Yuan, S Zonouz - Proceedings of the AAAI …, 2023 - ojs.aaai.org
Model compression and model defense for deep neural networks (DNNs) have
been extensively and individually studied. Considering the co-importance of model …

[BOOK][B] Low-power computer vision: improve the efficiency of artificial intelligence

GK Thiruvathukal, YH Lu, J Kim, Y Chen, B Chen - 2022 - books.google.com
Energy efficiency is critical for running computer vision on battery-powered systems, such as
mobile phones or UAVs (unmanned aerial vehicles, or drones). This book collects the …