Patch diffusion: Faster and more data-efficient training of diffusion models

Z Wang, Y Jiang, H Zheng, P Wang… - Advances in neural …, 2024 - proceedings.neurips.cc
Diffusion models are powerful, but they require a lot of time and data to train. We propose
Patch Diffusion, a generic patch-wise training framework, to significantly reduce the training …

The efficiency spectrum of large language models: An algorithmic survey

T Ding, T Chen, H Zhu, J Jiang, Y Zhong… - arXiv preprint arXiv …, 2023 - researchgate.net
The rapid growth of Large Language Models (LLMs) has been a driving force in
transforming various domains, reshaping the artificial general intelligence landscape …

Auto-Train-Once: Controller Network Guided Automatic Network Pruning from Scratch

X Wu, S Gao, Z Zhang, Z Li, R Bao… - Proceedings of the …, 2024 - openaccess.thecvf.com
Current techniques for deep neural network (DNN) pruning often involve intricate multi-step
processes that require domain-specific expertise, making their widespread adoption …

Towards data-agnostic pruning at initialization: what makes a good sparse mask?

H Pham, S Liu, L Xiang, D Le, H Wen… - Advances in Neural …, 2024 - proceedings.neurips.cc
Pruning at initialization (PaI) aims to remove weights of neural networks before training in
pursuit of training efficiency in addition to inference efficiency. While off-the-shelf PaI methods manage …

LoRAShear: Efficient large language model structured pruning and knowledge recovery

T Chen, T Ding, B Yadav, I Zharkov, L Liang - arXiv preprint arXiv …, 2023 - arxiv.org
Large Language Models (LLMs) have transformed the landscape of artificial intelligence,
while their enormous size presents significant challenges in terms of computational costs …

One less reason for filter pruning: Gaining free adversarial robustness with structured grouped kernel pruning

SH Zhong, Z You, J Zhang, S Zhao… - Advances in neural …, 2023 - proceedings.neurips.cc
Densely structured pruning methods utilizing simple pruning heuristics can deliver
immediate compression and acceleration benefits with acceptable benign performance …

Enhanced sparsification via stimulative training

S Tang, W Lin, H Ye, P Ye, C Yu, B Li… - European Conference on …, 2025 - Springer
Sparsification-based pruning has been an important category in model compression.
Existing methods commonly set sparsity-inducing penalty terms to suppress the importance …

Isomorphic Pruning for Vision Models

G Fang, X Ma, MB Mi, X Wang - European Conference on Computer …, 2025 - Springer
Structured pruning reduces the computational overhead of deep neural networks by
removing redundant sub-structures. However, assessing the relative importance of different …

Hardware-aware approach to deep neural network optimization

H Li, L Meng - Neurocomputing, 2023 - Elsevier
Deep neural networks (DNNs) have been a pivotal technology in a myriad of fields, boasting
remarkable achievements. Nevertheless, their substantial workload and inherent …

Edge-Cloud Collaborative UAV Object Detection: Edge-Embedded Lightweight Algorithm Design and Task Offloading Using Fuzzy Neural Network

Y Yuan, S Gao, Z Zhang, W Wang… - IEEE Transactions on …, 2024 - ieeexplore.ieee.org
With the rapid development of artificial intelligence and Unmanned Aerial Vehicle (UAV)
technology, AI-based UAVs are increasingly utilized in various industrial and civilian …