A-STAR: Test-time Attention Segregation and Retention for Text-to-Image Synthesis

A Agarwal, S Karanam, KJ Joseph… - Proceedings of the IEEE/CVF International Conference on …, 2023 - openaccess.thecvf.com
Abstract
While recent developments in text-to-image generative models have led to a suite of high-performing methods capable of producing creative imagery from free-form text, there are several limitations. By analyzing the cross-attention representations of these models, we notice two key issues. First, for text prompts that contain multiple concepts, there is a significant amount of pixel-space overlap (i.e., the same spatial regions) among pairs of different concepts. This eventually leads to the model being unable to distinguish between the two concepts, with one of them being ignored in the final generation. Next, while these models attempt to capture all such concepts during the beginning of denoising (e.g., the first few steps), as evidenced by cross-attention maps, this knowledge is not retained by the end of denoising (e.g., the last few steps). Such loss of knowledge eventually leads to inaccurate generation outputs. To address these issues, our key innovations include two test-time attention-based loss functions that substantially improve the performance of pretrained baseline text-to-image diffusion models. First, our attention segregation loss reduces the cross-attention overlap between attention maps of different concepts in the text prompt, thereby reducing the confusion/conflict among the various concepts and enabling the eventual capture of all concepts in the generated output. Next, our attention retention loss explicitly forces text-to-image diffusion models to retain cross-attention information for all concepts across all denoising time steps, thereby leading to reduced information loss and the preservation of all concepts in the generated output. We conduct extensive experiments with the proposed loss functions on a variety of text prompts and demonstrate that they lead to generated images that are significantly semantically closer to the input text when compared to baseline text-to-image diffusion models.
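To make the two losses concrete, the following is a minimal PyTorch sketch of how an attention segregation loss and an attention retention loss could be computed from per-concept cross-attention maps. It is an illustration under assumptions, not the authors' released code: the function names, the [H, W] map shapes, the min/max overlap measure for segregation, and the "don't fall below an earlier step's map" penalty for retention are all choices made here for clarity and may differ from the paper's exact formulation.

```python
# Illustrative sketch only; helper names and loss formulations are assumptions.
import torch

def attention_segregation_loss(attn_maps):
    """Penalize pixel-space overlap between attention maps of different concepts.

    attn_maps: list of [H, W] tensors, one per concept token in the prompt.
    Overlap is measured here as intersection-over-union of attention mass,
    summed over all concept pairs (an illustrative choice).
    """
    loss = attn_maps[0].new_zeros(())
    n = len(attn_maps)
    for i in range(n):
        for j in range(i + 1, n):
            inter = torch.minimum(attn_maps[i], attn_maps[j]).sum()
            union = torch.maximum(attn_maps[i], attn_maps[j]).sum()
            loss = loss + inter / (union + 1e-8)
    return loss

def attention_retention_loss(attn_maps, prev_attn_maps):
    """Encourage attention captured at an earlier denoising step to be retained.

    prev_attn_maps: detached per-concept maps from an earlier step, used as a
    reference that the current maps should not fall below.
    """
    loss = attn_maps[0].new_zeros(())
    for cur, prev in zip(attn_maps, prev_attn_maps):
        # Penalize attention mass that was present earlier but is lost now.
        loss = loss + torch.clamp(prev.detach() - cur, min=0.0).sum()
    return loss
```

In a test-time optimization loop of this kind, the combined loss would be backpropagated to the current latent at each denoising step (e.g., updating the latent with a gradient step on L_seg + L_ret before the next UNet call), so no retraining of the pretrained diffusion model is needed.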