Adversarial diffusion distillation

A Sauer, D Lorenz, A Blattmann… - arXiv preprint arXiv …, 2023 - arxiv.org
We introduce Adversarial Diffusion Distillation (ADD), a novel training approach that
efficiently samples large-scale foundational image diffusion models in just 1-4 steps while …

Score identity distillation: Exponentially fast distillation of pretrained diffusion models for one-step generation

M Zhou, H Zheng, Z Wang, M Yin… - Forty-first International …, 2024 - openreview.net
We introduce Score identity Distillation (SiD), an innovative data-free method that distills the
generative capabilities of pretrained diffusion models into a single-step generator. SiD not …

Your Student is Better Than Expected: Adaptive Teacher-Student Collaboration for Text-Conditional Diffusion Models

N Starodubcev, D Baranchuk… - Proceedings of the …, 2024 - openaccess.thecvf.com
Knowledge distillation methods have recently been shown to be a promising direction to
speed up the synthesis of large-scale diffusion models by requiring only a few inference …

Score-based Diffusion Models via Stochastic Differential Equations--a Technical Tutorial

W Tang, H Zhao - arXiv preprint arXiv:2402.07487, 2024 - arxiv.org
This is an expository article on score-based diffusion models, with a particular focus on
the formulation via stochastic differential equations (SDE). After a gentle introduction, we …

Towards a mathematical theory for consistency training in diffusion models

G Li, Z Huang, Y Wei - arXiv preprint arXiv:2402.07802, 2024 - arxiv.org
Consistency models, which were proposed to mitigate the high computational overhead
during the sampling phase of diffusion models, facilitate single-step sampling while attaining …

Exploring the frontiers of softmax: Provable optimization, applications in diffusion model, and beyond

J Gu, C Li, Y Liang, Z Shi, Z Song - arXiv preprint arXiv:2405.03251, 2024 - arxiv.org
The softmax activation function plays a crucial role in the success of large language models
(LLMs), particularly in the self-attention mechanism of the widely adopted Transformer …

CoDi: Conditional Diffusion Distillation for Higher-Fidelity and Faster Image Generation

K Mei, M Delbracio, H Talebi, Z Tu… - Proceedings of the …, 2024 - openaccess.thecvf.com
Large generative diffusion models have revolutionized text-to-image generation and offer
immense potential for conditional generation tasks such as image enhancement, restoration …

Residual Learning in Diffusion Models

J Zhang, D Liu, E Park, S Zhang… - Proceedings of the IEEE …, 2024 - openaccess.thecvf.com
Diffusion models (DMs) have achieved remarkable generative performance, particularly with
the introduction of stochastic differential equations (SDEs). Nevertheless, a gap emerges in …

Hyper-SD: Trajectory segmented consistency model for efficient image synthesis

Y Ren, X Xia, Y Lu, J Zhang, J Wu, P Xie… - arXiv preprint arXiv …, 2024 - arxiv.org
Recently, a series of diffusion-aware distillation algorithms have emerged to alleviate the
computational overhead associated with the multi-step inference process of Diffusion …

Distilling Diffusion Models into Conditional GANs

M Kang, R Zhang, C Barnes, S Paris, S Kwak… - arXiv preprint arXiv …, 2024 - arxiv.org
We propose a method to distill a complex multistep diffusion model into a single-step
conditional GAN student model, dramatically accelerating inference, while preserving image …