Interpreting CLIP's Image Representation via Text-Based Decomposition

Y Gandelsman, AA Efros, J Steinhardt - arXiv preprint arXiv:2310.05916, 2023 - arxiv.org
We investigate the CLIP image encoder by analyzing how individual model components affect the final representation. We decompose the image representation as a sum across individual image patches, model layers, and attention heads, and use CLIP's text representation to interpret the summands. Interpreting the attention heads, we characterize each head's role by automatically finding text representations that span its output space, which reveals property-specific roles for many heads (e.g. location or shape). Next, interpreting the image patches, we uncover an emergent spatial localization within CLIP. Finally, we use this understanding to remove spurious features from CLIP and to create a strong zero-shot image segmenter. Our results indicate that a scalable understanding of transformer models is attainable and can be used to repair and improve models.
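
For intuition, the decomposition described in the abstract can be sketched as follows (notation assumed here for illustration, not taken verbatim from the paper). For a ViT-based CLIP encoder with L layers, H attention heads per layer, and N image tokens, linearity of the residual stream lets the final image representation be written as

$$
M(I) \;=\; z_{0} \;+\; \sum_{l=1}^{L} m_{l} \;+\; \sum_{l=1}^{L}\sum_{h=1}^{H}\sum_{i=1}^{N} c_{l,h,i},
$$

where $z_0$ is the projected input embedding, $m_l$ is the direct contribution of layer $l$'s MLP, and $c_{l,h,i}$ is the direct contribution of attention head $h$ in layer $l$ acting on image token $i$. Because every summand lives in CLIP's joint image-text space, each can be compared against text embeddings.

The head-interpretation step ("automatically finding text representations that span its output space") can likewise be sketched as a greedy search over a pool of candidate text embeddings. The function below is a hypothetical illustration of that idea, not the authors' reference implementation:

```python
import numpy as np

def textspan(head_out, text_emb, k=5, eps=1e-8):
    """Greedy sketch in the spirit of the paper's text-span search
    (hypothetical helper, not the authors' reference code).

    head_out: (num_images, d) array of one head's direct contributions
    text_emb: (num_texts,  d) array of CLIP text embeddings for a pool
              of candidate descriptions
    Returns indices of k texts that greedily span the head's output space.
    """
    C = head_out - head_out.mean(axis=0)    # center the head outputs
    T = text_emb.astype(float).copy()
    chosen = []
    for _ in range(k):
        # Score each remaining text direction by how much output
        # variance it explains, using unit-norm directions.
        norms = np.linalg.norm(T, axis=1) + eps
        scores = ((C @ T.T) / norms) ** 2
        j = int(np.argmax(scores.sum(axis=0)))
        chosen.append(j)
        d = T[j] / norms[j]                  # unit direction just selected
        C -= np.outer(C @ d, d)              # project it out of the outputs
        T -= np.outer(T @ d, d)              # ...and out of the text pool
    return chosen
```

After each selection, the chosen direction is projected out of both pools, so subsequent picks capture complementary structure; running this per head is one way the property-specific roles (e.g. location or shape) mentioned above could surface.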