StyleAdapter: A Single-Pass LoRA-Free Model for Stylized Image Generation

Z Wang, X Wang, L Xie, Z Qi, Y Shan, W Wang… - arXiv preprint arXiv:2309.01770, 2023 - arxiv.org
This paper presents a LoRA-free method for stylized image generation that takes a text prompt and style reference images as inputs and produces an output image in a single pass. Unlike existing methods that rely on training a separate LoRA for each style, our method can adapt to various styles with a unified model. However, this poses two challenges: 1) the prompt loses controllability over the generated content, and 2) the output image inherits both the semantic and style features of the style reference image, compromising its content fidelity. To address these challenges, we introduce StyleAdapter, a model that comprises two components: a two-path cross-attention module (TPCA) and three decoupling strategies. These components enable our model to process the prompt and style reference features separately and to reduce the strong coupling between the semantic and style information in the style references. StyleAdapter can generate high-quality images that match the content of the prompts and adopt the style of the references (even for unseen styles) in a single pass, which is more flexible and efficient than previous methods. Experiments demonstrate the superiority of our method over previous works.
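
The abstract names the two-path cross-attention module (TPCA) but does not show its design, and no code appears here. As a rough illustration of the stated idea, below is a minimal PyTorch sketch of a two-path cross-attention block in which latent image tokens attend to prompt features and style-reference features along separate paths before the results are fused. The class name, dimensions, and additive fusion with a learnable scale are assumptions for illustration, not the paper's actual architecture.

```python
import torch
import torch.nn as nn

class TwoPathCrossAttention(nn.Module):
    """Illustrative sketch of a two-path cross-attention block: one path
    attends to text-prompt features, the other to style-reference features.
    All details here are assumptions, not the paper's exact design."""

    def __init__(self, dim: int, num_heads: int = 8):
        super().__init__()
        self.text_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.style_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        # Learnable coefficient balancing the style contribution against
        # the content (text) contribution; an assumed fusion rule.
        self.style_scale = nn.Parameter(torch.tensor(1.0))

    def forward(self, x, text_feats, style_feats):
        # x: (B, N, dim) latent image tokens from the diffusion backbone.
        # text_feats: (B, T, dim) prompt embeddings.
        # style_feats: (B, S, dim) embeddings from the style reference images.
        text_out, _ = self.text_attn(x, text_feats, text_feats)
        style_out, _ = self.style_attn(x, style_feats, style_feats)
        # Keeping the two paths separate until this fusion step is what
        # lets the prompt retain control over content while the style
        # references inject style.
        return x + text_out + self.style_scale * style_out

# Usage sketch with made-up shapes (batch 2, 64 latent tokens, dim 320):
block = TwoPathCrossAttention(dim=320)
out = block(torch.randn(2, 64, 320), torch.randn(2, 77, 320), torch.randn(2, 16, 320))
```

In this reading, a block like this would stand in for the single cross-attention layer of a text-to-image backbone, so that content control (text path) and style injection (style path) can be weighted independently; how StyleAdapter actually fuses the two paths and applies its three decoupling strategies is specified in the paper itself.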