A survey on deep neural network pruning: Taxonomy, comparison, analysis, and recommendations

H Cheng, M Zhang, JQ Shi - IEEE Transactions on Pattern …, 2024 - ieeexplore.ieee.org
Modern deep neural networks, particularly recent large language models, come with
massive model sizes that require significant computational and storage resources. To …

Structured pruning for deep convolutional neural networks: A survey

Y He, L Xiao - IEEE Transactions on Pattern Analysis and …, 2023 - ieeexplore.ieee.org
The remarkable performance of deep convolutional neural networks (CNNs) is generally
attributed to their deeper and wider architectures, which can come with significant …

Generating videos with dynamics-aware implicit generative adversarial networks

S Yu, J Tack, S Mo, H Kim, J Kim, JW Ha… - arXiv preprint arXiv …, 2022 - arxiv.org
In the deep learning era, generating long, high-quality videos still remains challenging due
to the spatio-temporal complexity and continuity of videos. Existing prior works have …

Wavelet knowledge distillation: Towards efficient image-to-image translation

L Zhang, X Chen, X Tu, P Wan… - Proceedings of the …, 2022 - openaccess.thecvf.com
Remarkable achievements have been attained with Generative Adversarial Networks
(GANs) in image-to-image translation. However, due to the tremendous number of parameters …

GAN compression: Efficient architectures for interactive conditional GANs

M Li, J Lin, Y Ding, Z Liu, JY Zhu… - Proceedings of the …, 2020 - openaccess.thecvf.com
Conditional Generative Adversarial Networks (cGANs) have enabled controllable
image synthesis for many computer vision and graphics applications. However, recent …

InfiniteNature-Zero: Learning perpetual view generation of natural scenes from single images

Z Li, Q Wang, N Snavely, A Kanazawa - European Conference on …, 2022 - Springer
We present a method for learning to generate unbounded flythrough videos of natural
scenes starting from a single view. This capability is learned from a collection of single …

MI-GAN: A simple baseline for image inpainting on mobile devices

A Sargsyan, S Navasardyan, X Xu… - Proceedings of the …, 2023 - openaccess.thecvf.com
In recent years, many deep learning-based image inpainting methods have been developed
by the research community. Some of those methods have shown impressive image …

Persistent nature: A generative model of unbounded 3D worlds

L Chai, R Tucker, Z Li, P Isola… - Proceedings of the …, 2023 - openaccess.thecvf.com
Despite increasingly realistic image quality, recent 3D image generative models often
operate on 3D volumes of fixed extent with limited camera motions. We investigate the task …

Efficient spatially sparse inference for conditional GANs and diffusion models

M Li, J Lin, C Meng, S Ermon… - Advances in neural …, 2022 - proceedings.neurips.cc
During image editing, existing deep generative models tend to re-synthesize the entire
output from scratch, including the unedited regions. This leads to a significant waste of …

On architectural compression of text-to-image diffusion models

BK Kim, HK Song, T Castells, S Choi - 2023 - openreview.net
Exceptional text-to-image (T2I) generation results of Stable Diffusion models (SDMs) come
with substantial computational demands. To resolve this issue, recent research on efficient …