Multi-concept customization of text-to-image diffusion
While generative models produce high-quality images of concepts learned from a large-
scale database, a user often wishes to synthesize instantiations of their own concepts (for …
DreamBooth: Fine-tuning text-to-image diffusion models for subject-driven generation
Large text-to-image models have achieved a remarkable leap in the evolution of AI, enabling high-
quality and diverse synthesis of images from a given text prompt. However, these models …
SVDiff: Compact parameter space for diffusion fine-tuning
Recently, diffusion models have achieved remarkable success in text-to-image generation,
enabling the creation of high-quality images from text prompts and various conditions …
Ablating concepts in text-to-image diffusion models
Large-scale text-to-image diffusion models can generate high-fidelity images with powerful
compositional ability. However, these models are typically trained on an enormous amount …
StyleGAN-NADA: CLIP-guided domain adaptation of image generators
Can a generative model be trained to produce images from a specific domain, guided only
by a text prompt, without seeing any image? In other words: can an image generator be …
Surgical fine-tuning improves adaptation to distribution shifts
A common approach to transfer learning under distribution shift is to fine-tune the last few
layers of a pre-trained model, preserving learned features while also adapting to the new …
Training generative adversarial networks with limited data
Training generative adversarial networks (GANs) using too little data typically leads to
discriminator overfitting, causing training to diverge. We propose an adaptive discriminator …
A comprehensive survey on data-efficient GANs in image generation
Generative Adversarial Networks (GANs) have achieved remarkable success in image
synthesis. These successes of GANs rely on large-scale datasets, which come at great cost …
Few-shot image generation via cross-domain correspondence
Training generative models, such as GANs, on a target domain containing limited examples
(e.g., 10) can easily result in overfitting. In this work, we seek to utilize a large source domain …
Visual prompt tuning for generative transfer learning
Learning generative image models from various domains efficiently requires transferring
knowledge from an image synthesis model trained on a large dataset. We present a recipe …