Teaching structured vision & language concepts to vision & language models

S Doveh, A Arbelle, S Harary… - Proceedings of the …, 2023 - openaccess.thecvf.com
Vision and Language (VL) models have demonstrated remarkable zero-shot performance in
a variety of tasks. However, some aspects of complex language understanding still remain a …

MURAL: Multimodal, multitask retrieval across languages

A Jain, M Guo, K Srinivasan, T Chen… - arXiv preprint arXiv …, 2021 - arxiv.org
Both image-caption pairs and translation pairs provide the means to learn deep
representations of and connections between languages. We use both types of pairs in …

LightningDOT: Pre-training visual-semantic embeddings for real-time image-text retrieval

S Sun, YC Chen, L Li, S Wang, Y Fang… - Proceedings of the 2021 …, 2021 - aclanthology.org
Multimodal pre-training has propelled great advancement in vision-and-language research.
These large-scale pre-trained models, although successful, unfortunately suffer from slow …

M3P: Learning universal representations via multitask multilingual multimodal pre-training

M Ni, H Huang, L Su, E Cui, T Bharti… - Proceedings of the …, 2021 - openaccess.thecvf.com
We present M3P, a Multitask Multilingual Multimodal Pre-trained model that combines
multilingual pre-training and multimodal pre-training into a unified framework via multitask …

Retrieve fast, rerank smart: Cooperative and joint approaches for improved cross-modal retrieval

G Geigle, J Pfeiffer, N Reimers, I Vulić… - Transactions of the …, 2022 - direct.mit.edu
Current state-of-the-art approaches to cross-modal retrieval process text and visual input
jointly, relying on Transformer-based architectures with cross-attention mechanisms that …

Multilingual multimodal pre-training for zero-shot cross-lingual transfer of vision-language models

PY Huang, M Patrick, J Hu, G Neubig, F Metze… - arXiv preprint arXiv …, 2021 - arxiv.org
This paper studies zero-shot cross-lingual transfer of vision-language models. Specifically,
we focus on multilingual text-to-video search and propose a Transformer-based model that …

Cross-lingual cross-modal retrieval with noise-robust learning

Y Wang, J Dong, T Liang, M Zhang, R Cai… - Proceedings of the 30th …, 2022 - dl.acm.org
Despite the recent developments in the field of cross-modal retrieval, there has been less
research focusing on low-resource languages due to the lack of manually annotated …

Text to image generation: Leaving no language behind

P Reviriego, E Merino-Gómez - arXiv preprint arXiv:2208.09333, 2022 - arxiv.org
One of the latest applications of Artificial Intelligence (AI) is to generate images from natural
language descriptions. These generators are now becoming available and achieve …

Assessing multilingual fairness in pre-trained multimodal representations

J Wang, Y Liu, XE Wang - arXiv preprint arXiv:2106.06683, 2021 - arxiv.org
Recently, pre-trained multimodal models, such as CLIP, have shown exceptional capabilities
towards connecting images and natural language. The textual representations in English …

Cross-lingual cross-modal consolidation for effective multilingual video corpus moment retrieval

J Liu, T Yu, H Peng, M Sun, P Li - Findings of the Association for …, 2022 - aclanthology.org
Existing multilingual video corpus moment retrieval (mVCMR) methods are mainly based on
a two-stream structure. The visual stream utilizes the visual content in the video to estimate …