Multi-concept customization of text-to-image diffusion

N Kumari, B Zhang, R Zhang… - Proceedings of the …, 2023 - openaccess.thecvf.com
While generative models produce high-quality images of concepts learned from a large-
scale database, a user often wishes to synthesize instantiations of their own concepts (for …

DreamBooth: Fine tuning text-to-image diffusion models for subject-driven generation

N Ruiz, Y Li, V Jampani, Y Pritch… - Proceedings of the …, 2023 - openaccess.thecvf.com
Large text-to-image models have achieved a remarkable leap in the evolution of AI, enabling high-
quality and diverse synthesis of images from a given text prompt. However, these models …

SVDiff: Compact parameter space for diffusion fine-tuning

L Han, Y Li, H Zhang, P Milanfar… - Proceedings of the …, 2023 - openaccess.thecvf.com
Recently, diffusion models have achieved remarkable success in text-to-image generation,
enabling the creation of high-quality images from text prompts and various conditions …

Ablating concepts in text-to-image diffusion models

N Kumari, B Zhang, SY Wang… - Proceedings of the …, 2023 - openaccess.thecvf.com
Large-scale text-to-image diffusion models can generate high-fidelity images with powerful
compositional ability. However, these models are typically trained on an enormous amount …

StyleGAN-NADA: CLIP-guided domain adaptation of image generators

R Gal, O Patashnik, H Maron, AH Bermano… - ACM Transactions on …, 2022 - dl.acm.org
Can a generative model be trained to produce images from a specific domain, guided only
by a text prompt, without seeing any image? In other words: can an image generator be …

Surgical fine-tuning improves adaptation to distribution shifts

Y Lee, AS Chen, F Tajwar, A Kumar, H Yao… - arXiv preprint arXiv …, 2022 - arxiv.org
A common approach to transfer learning under distribution shift is to fine-tune the last few
layers of a pre-trained model, preserving learned features while also adapting to the new …

Training generative adversarial networks with limited data

T Karras, M Aittala, J Hellsten, S Laine… - Advances in neural …, 2020 - proceedings.neurips.cc
Training generative adversarial networks (GANs) using too little data typically leads to
discriminator overfitting, causing training to diverge. We propose an adaptive discriminator …

A comprehensive survey on data-efficient GANs in image generation

Z Li, B Xia, J Zhang, C Wang, B Li - arXiv preprint arXiv:2204.08329, 2022 - arxiv.org
Generative Adversarial Networks (GANs) have achieved remarkable success in image
synthesis. These successes of GANs rely on large-scale datasets, which require too much cost …

Few-shot image generation via cross-domain correspondence

U Ojha, Y Li, J Lu, AA Efros, YJ Lee… - Proceedings of the …, 2021 - openaccess.thecvf.com
Training generative models, such as GANs, on a target domain containing limited examples
(e.g., 10) can easily result in overfitting. In this work, we seek to utilize a large source domain …

Visual prompt tuning for generative transfer learning

K Sohn, H Chang, J Lezama… - Proceedings of the …, 2023 - openaccess.thecvf.com
Efficiently learning generative image models across various domains requires transferring
knowledge from an image synthesis model trained on a large dataset. We present a recipe …