Compression of deep convolutional neural networks for fast and low power mobile applications
Although the latest high-end smartphones have powerful CPUs and GPUs, running deep
convolutional neural networks (CNNs) for complex tasks such as ImageNet classification on …
Tensor Decomposition for Model Reduction in Neural Networks: A Review [Feature]
Modern neural networks have revolutionized the fields of computer vision (CV) and Natural
Language Processing (NLP). They are widely used for solving complex CV tasks and NLP …
Knowledge extraction with no observable data
Knowledge distillation transfers the knowledge of a large neural network into a
smaller one and has been shown to be effective, especially when the amount of training data …
Efficient neural network compression
Network compression reduces the computational complexity and memory consumption of
deep neural networks by reducing the number of parameters. In SVD-based network …
A review of deterministic approximate inference techniques for Bayesian machine learning
S Sun - Neural Computing and Applications, 2013 - Springer
A central task of Bayesian machine learning is to infer the posterior distribution of hidden
random variables given observations and calculate expectations with respect to this …
[Book] Handbook of robust low-rank and sparse matrix decomposition: Applications in image and video processing
Handbook of Robust Low-Rank and Sparse Matrix Decomposition: Applications in Image
and Video Processing shows you how robust subspace learning and tracking by …
Posterior collapse of a linear latent variable model
This work identifies the existence and cause of a type of posterior collapse that frequently
occurs in Bayesian deep learning practice. For a general linear latent variable model …
A multiple-phenotype imputation method for genetic studies
Genetic association studies have yielded a wealth of biological discoveries. However, these
studies have mostly analyzed one trait and one SNP at a time, thus failing to capture the …
Bayesian optimization-based global optimal rank selection for compression of convolutional neural networks
Recently, convolutional neural network (CNN) compression via low-rank decomposition has
achieved remarkable performance. Finding the optimal rank is a crucial problem because …
Towards flexible sparsity-aware modeling: Automatic tensor rank learning using the generalized hyperbolic prior
Tensor rank learning for canonical polyadic decomposition (CPD) has long been deemed as
an essential yet challenging problem. In particular, since the tensor rank controls the …
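The canonical polyadic decomposition (CPD) behind this entry can be sketched with plain alternating least squares (ALS). Note this sketch fixes the rank R in advance; the cited work's contribution is learning the rank automatically via a sparsity-inducing prior, which is not shown here. Tensor sizes and the rank are illustrative assumptions.

```python
import numpy as np

def unfold(T, mode):
    """Mode-n unfolding of a 3-way tensor (C-order, moved axis first)."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def khatri_rao(X, Y):
    """Column-wise Kronecker product of X (I x R) and Y (J x R) -> (I*J x R)."""
    R = X.shape[1]
    return np.einsum('ir,jr->ijr', X, Y).reshape(-1, R)

def cp_als(T, R, iters=200, seed=0):
    """Fit T ~= sum_r a_r (outer) b_r (outer) c_r by alternating least squares."""
    rng = np.random.default_rng(seed)
    A, B, C = (rng.standard_normal((n, R)) for n in T.shape)
    for _ in range(iters):
        # Each update is the exact least-squares solution for one factor,
        # using T_(n) = F_n * (KR of the other two factors)^T.
        A = unfold(T, 0) @ np.linalg.pinv(khatri_rao(B, C).T)
        B = unfold(T, 1) @ np.linalg.pinv(khatri_rao(A, C).T)
        C = unfold(T, 2) @ np.linalg.pinv(khatri_rao(A, B).T)
    return A, B, C

# Check the fit on a synthetic, exactly rank-2 tensor.
rng = np.random.default_rng(1)
A0, B0, C0 = (rng.standard_normal((n, 2)) for n in (4, 5, 6))
T = np.einsum('ir,jr,kr->ijk', A0, B0, C0)
A, B, C = cp_als(T, R=2)
That = np.einsum('ir,jr,kr->ijk', A, B, C)
print(np.linalg.norm(T - That) / np.linalg.norm(T))
```

Because the rank R controls both the model capacity and the compression ratio, picking it wrongly under- or over-fits; that is the motivation for the automatic rank-learning priors the entry describes.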