Vision-based holistic scene understanding towards proactive human–robot collaboration
Recently, human–robot collaboration (HRC) has emerged as a promising paradigm for mass
personalization in manufacturing owing to the potential to fully exploit the strength of human …
Multimodal research in vision and language: A review of current and emerging trends
Deep Learning and its applications have cascaded impactful research and development
with a diverse range of modalities present in real-world data. More recently, this has …
How much can CLIP benefit vision-and-language tasks?
Most existing Vision-and-Language (V&L) models rely on pre-trained visual encoders, using
a relatively small set of manually-annotated data (as compared to web-crawled data), to …
History aware multimodal transformer for vision-and-language navigation
Vision-and-language navigation (VLN) aims to build autonomous visual agents that follow
instructions and navigate in real scenes. To remember previously visited locations and …
Think global, act local: Dual-scale graph transformer for vision-and-language navigation
Following language instructions to navigate in unseen environments is a challenging
problem for autonomous embodied agents. The agent not only needs to ground languages …
NavGPT: Explicit reasoning in vision-and-language navigation with large language models
Trained with an unprecedented scale of data, large language models (LLMs) like ChatGPT
and GPT-4 exhibit the emergence of significant reasoning abilities from model scaling. Such …
VLN BERT: A recurrent vision-and-language BERT for navigation
Accuracy of many visiolinguistic tasks has benefited significantly from the application of
vision-and-language (V&L) BERT. However, its application for the task of vision-and …
Room-across-room: Multilingual vision-and-language navigation with dense spatiotemporal grounding
We introduce Room-Across-Room (RxR), a new Vision-and-Language Navigation (VLN)
dataset. RxR is multilingual (English, Hindi, and Telugu) and larger (more paths and …
Airbert: In-domain pretraining for vision-and-language navigation
Vision-and-language navigation (VLN) aims to enable embodied agents to navigate in
realistic environments using natural language instructions. Given the scarcity of domain …
Vision-and-language navigation: A survey of tasks, methods, and future directions
A long-term goal of AI research is to build intelligent agents that can communicate with
humans in natural language, perceive the environment, and perform real-world tasks. Vision …