BLIP-Diffusion: Pre-trained Subject Representation for Controllable Text-to-Image Generation and Editing

D Li, J Li, S Hoi - Advances in Neural Information …, 2024 - proceedings.neurips.cc
Subject-driven text-to-image generation models create novel renditions of an input subject
based on text prompts. Existing models suffer from lengthy fine-tuning and difficulties …
