CLEAR: Improving Vision-Language Navigation with Cross-Lingual, Environment-Agnostic Representations

Jialu Li, Hao Tan, Mohit Bansal


Abstract
Vision-and-Language Navigation (VLN) tasks require an agent to navigate through an environment based on language instructions. In this paper, we aim to solve two key challenges in this task: utilizing multilingual instructions for improved instruction-path grounding, and navigating through new environments that are unseen during training. To address these challenges, first, our agent learns a shared and visually-aligned cross-lingual language representation for the three languages (English, Hindi, and Telugu) in the Room-Across-Room dataset. Our language representation learning is guided by text pairs that are aligned by visual information. Second, our agent learns an environment-agnostic visual representation by maximizing the similarity between semantically-aligned image pairs (with constraints on object matching) from different environments. Our environment-agnostic visual representation can mitigate the environment bias induced by low-level visual information. Empirically, on the Room-Across-Room dataset, we show that, when generalizing to unseen environments, our multilingual agent equipped with the cross-lingual language representation and the environment-agnostic visual representation achieves large improvements over the strong baseline model on all metrics. Furthermore, we show that our learned language and visual representations can be successfully transferred to the Room-to-Room and Cooperative Vision-and-Dialogue Navigation tasks, and we present detailed qualitative and quantitative generalization and grounding analyses.
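The core mechanism in the abstract is contrastive alignment: pulling together embeddings of aligned pairs (cross-lingual text pairs, or semantically-aligned image pairs from different environments). Below is a minimal sketch assuming a standard InfoNCE-style objective with in-batch negatives; the paper's actual loss, encoders, and object-matching constraint are not reproduced here, and the function name `alignment_loss` is illustrative.

```python
import torch
import torch.nn.functional as F

def alignment_loss(anchor, positive, temperature=0.07):
    """Contrastive (InfoNCE-style) loss that pulls aligned pairs together.

    anchor, positive: (batch, dim) embeddings of aligned pairs, e.g. an
    English instruction vs. its Hindi/Telugu counterpart, or two
    semantically-aligned views from different environments.
    """
    anchor = F.normalize(anchor, dim=-1)
    positive = F.normalize(positive, dim=-1)
    # (batch, batch) cosine similarities, scaled by temperature.
    logits = anchor @ positive.t() / temperature
    targets = torch.arange(anchor.size(0), device=anchor.device)
    # Matched pairs sit on the diagonal; all other in-batch pairs act as negatives.
    return F.cross_entropy(logits, targets)

# Toy usage: 8 aligned pairs of 512-d embeddings.
a = torch.randn(8, 512)
b = torch.randn(8, 512)
print(alignment_loss(a, b).item())
```

Maximizing similarity for aligned pairs while contrasting against in-batch negatives is what yields representations that are shared across languages (or environments) rather than tied to surface-level features.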
Anthology ID:
2022.findings-naacl.48
Volume:
Findings of the Association for Computational Linguistics: NAACL 2022
Month:
July
Year:
2022
Address:
Seattle, United States
Editors:
Marine Carpuat, Marie-Catherine de Marneffe, Ivan Vladimir Meza Ruiz
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
633–649
URL:
https://aclanthology.org/2022.findings-naacl.48
DOI:
10.18653/v1/2022.findings-naacl.48
Cite (ACL):
Jialu Li, Hao Tan, and Mohit Bansal. 2022. CLEAR: Improving Vision-Language Navigation with Cross-Lingual, Environment-Agnostic Representations. In Findings of the Association for Computational Linguistics: NAACL 2022, pages 633–649, Seattle, United States. Association for Computational Linguistics.
Cite (Informal):
CLEAR: Improving Vision-Language Navigation with Cross-Lingual, Environment-Agnostic Representations (Li et al., Findings 2022)
PDF:
https://preview.aclanthology.org/naacl-24-ws-corrections/2022.findings-naacl.48.pdf
Software:
2022.findings-naacl.48.software.zip
Video:
https://preview.aclanthology.org/naacl-24-ws-corrections/2022.findings-naacl.48.mp4
Code:
jialuli-luka/clear
Data:
RxR