Patch diffusion: Faster and more data-efficient training of diffusion models
Diffusion models are powerful, but they require a lot of time and data to train. We propose
Patch Diffusion, a generic patch-wise training framework, to significantly reduce the training …
The efficiency spectrum of large language models: An algorithmic survey
The rapid growth of Large Language Models (LLMs) has been a driving force in
transforming various domains, reshaping the artificial general intelligence landscape …
Auto-Train-Once: Controller Network Guided Automatic Network Pruning from Scratch
Current techniques for deep neural network (DNN) pruning often involve intricate multi-step
processes that require domain-specific expertise, making their widespread adoption …
Towards data-agnostic pruning at initialization: what makes a good sparse mask?
Pruning at initialization (PaI) aims to remove weights of neural networks before training in
pursuit of training efficiency in addition to inference efficiency. While off-the-shelf PaI methods manage …
LoRAShear: Efficient large language model structured pruning and knowledge recovery
Large Language Models (LLMs) have transformed the landscape of artificial intelligence,
while their enormous size presents significant challenges in terms of computational costs …
One less reason for filter pruning: Gaining free adversarial robustness with structured grouped kernel pruning
Densely structured pruning methods utilizing simple pruning heuristics can deliver
immediate compression and acceleration benefits with acceptable benign performance …
Enhanced sparsification via stimulative training
Sparsification-based pruning has been an important category in model compression.
Existing methods commonly set sparsity-inducing penalty terms to suppress the importance …
Isomorphic Pruning for Vision Models
Structured pruning reduces the computational overhead of deep neural networks by
removing redundant sub-structures. However, assessing the relative importance of different …
Edge-Cloud Collaborative UAV Object Detection: Edge-Embedded Lightweight Algorithm Design and Task Offloading Using Fuzzy Neural Network
Y Yuan, S Gao, Z Zhang, W Wang… - IEEE Transactions on …, 2024 - ieeexplore.ieee.org
With the rapid development of artificial intelligence and Unmanned Aerial Vehicle (UAV)
technology, AI-based UAVs are increasingly utilized in various industrial and civilian …