MoA: Mixture-of-Attention for Subject-Context Disentanglement in Personalized Image Generation

D Ostashev, Y Fang, S Tulyakov, K Aberman - arXiv preprint arXiv …, 2024 - arxiv.org
We introduce a new architecture for personalization of text-to-image diffusion models,
coined Mixture-of-Attention (MoA). Inspired by the Mixture-of-Experts mechanism utilized in …

PuLID: Pure and Lightning ID Customization via Contrastive Alignment

Z Guo, Y Wu, Z Chen, L Chen, Q He - arXiv preprint arXiv:2404.16022, 2024 - arxiv.org
We propose Pure and Lightning ID customization (PuLID), a novel tuning-free ID
customization method for text-to-image generation. By incorporating a Lightning T2I branch …

TurboEdit: Text-Based Image Editing Using Few-Step Diffusion Models

G Deutch, R Gal, D Garibi, O Patashnik… - arXiv preprint arXiv …, 2024 - arxiv.org
Diffusion models have opened the path to a wide range of text-based image editing
frameworks. However, these typically build on the multi-step nature of the diffusion …

RectifID: Personalizing Rectified Flow with Anchored Classifier Guidance

Z Sun, Z Yang, Y Jin, H Chi, K Xu, L Chen… - arXiv preprint arXiv …, 2024 - arxiv.org
Customizing diffusion models to generate identity-preserving images from user-provided
reference images is an intriguing new problem. The prevalent approaches typically require …

RepControlNet: ControlNet Reparameterization

Z Deng, K Zhou, F Wang, Z Mi - arXiv preprint arXiv:2408.09240, 2024 - arxiv.org
With the wide application of diffusion models, the high cost of inference resources has
become an important bottleneck for their universal application. Controllable generation, such …