Compression of deep convolutional neural networks for fast and low power mobile applications

YD Kim, E Park, S Yoo, T Choi, L Yang… - arXiv preprint arXiv …, 2015 - arxiv.org
Although the latest high-end smartphones have powerful CPUs and GPUs, running deeper
convolutional neural networks (CNNs) for complex tasks such as ImageNet classification on …

Tensor Decomposition for Model Reduction in Neural Networks: A Review [Feature]

X Liu, KK Parhi - IEEE Circuits and Systems Magazine, 2023 - ieeexplore.ieee.org
Modern neural networks have revolutionized the fields of computer vision (CV) and natural
language processing (NLP). They are widely used for solving complex CV tasks and NLP …

Knowledge extraction with no observable data

J Yoo, M Cho, T Kim, U Kang - Advances in Neural …, 2019 - proceedings.neurips.cc
Knowledge distillation transfers the knowledge of a large neural network into a
smaller one and has been shown to be effective especially when the amount of training data …

Efficient neural network compression

H Kim, MUK Khan, CM Kyung - Proceedings of the IEEE …, 2019 - openaccess.thecvf.com
Network compression reduces the computational complexity and memory consumption of
deep neural networks by reducing the number of parameters. In SVD-based network …

A review of deterministic approximate inference techniques for Bayesian machine learning

S Sun - Neural Computing and Applications, 2013 - Springer
A central task of Bayesian machine learning is to infer the posterior distribution of hidden
random variables given observations and calculate expectations with respect to this …

[BOOK][B] Handbook of robust low-rank and sparse matrix decomposition: Applications in image and video processing

T Bouwmans, NS Aybat, E Zahzah - 2016 - books.google.com
Handbook of Robust Low-Rank and Sparse Matrix Decomposition: Applications in Image
and Video Processing shows you how robust subspace learning and tracking by …

Posterior collapse of a linear latent variable model

Z Wang, L Ziyin - Advances in Neural Information …, 2022 - proceedings.neurips.cc
This work identifies the existence and cause of a type of posterior collapse that frequently
occurs in the Bayesian deep learning practice. For a general linear latent variable model …

A multiple-phenotype imputation method for genetic studies

A Dahl, V Iotchkova, A Baud, Å Johansson… - Nature …, 2016 - nature.com
Genetic association studies have yielded a wealth of biological discoveries. However, these
studies have mostly analyzed one trait and one SNP at a time, thus failing to capture the …

Bayesian optimization-based global optimal rank selection for compression of convolutional neural networks

T Kim, J Lee, Y Choe - IEEE Access, 2020 - ieeexplore.ieee.org
Recently, convolutional neural network (CNN) compression via low-rank decomposition has
achieved remarkable performance. Finding the optimal rank is a crucial problem because …

Towards flexible sparsity-aware modeling: Automatic tensor rank learning using the generalized hyperbolic prior

L Cheng, Z Chen, Q Shi, YC Wu… - IEEE Transactions on …, 2022 - ieeexplore.ieee.org
Tensor rank learning for canonical polyadic decomposition (CPD) has long been deemed an
essential yet challenging problem. In particular, since the tensor rank controls the …