Learning to Stop: A Simple yet Effective Approach to Urban Vision-Language Navigation

Jiannan Xiang, Xin Wang, William Yang Wang


Abstract
Vision-and-Language Navigation (VLN) is a natural language grounding task in which an agent learns to follow language instructions and navigate to specified destinations in real-world environments. A key challenge is to recognize and stop at the correct location, especially in complicated outdoor environments. Existing methods treat the STOP action the same as other actions, which results in the undesirable behavior that the agent often fails to stop at the destination even when it is on the right path. Therefore, we propose Learning to Stop (L2Stop), a simple yet effective policy module that differentiates STOP from other actions. Our approach achieves a new state of the art on the challenging urban VLN dataset Touchdown, outperforming the baseline by 6.89% (absolute improvement) on Success weighted by Edit Distance (SED).
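The core idea of the abstract, a policy that treats the STOP decision separately from navigation actions, can be sketched as two decision heads consulted in sequence. This is a minimal illustrative sketch, not the authors' architecture: the layer sizes, random weights, action names, and the "stop head first" control flow are all assumptions made for clarity.

```python
import numpy as np

class StopAwarePolicy:
    """Illustrative sketch of a policy that decouples STOP from other
    actions, in the spirit of L2Stop. Weights are random placeholders;
    a real agent would learn them from instruction-following data."""

    def __init__(self, hidden_dim=8, num_nav_actions=3, seed=0):
        rng = np.random.default_rng(seed)
        # Dedicated binary head: stop vs. continue (assumed design).
        self.w_stop = rng.normal(size=(hidden_dim, 2))
        # Separate head over navigation actions (e.g. forward/left/right).
        self.w_nav = rng.normal(size=(hidden_dim, num_nav_actions))
        self.nav_actions = ["forward", "left", "right"]

    def act(self, state):
        # Consult the stop head first; only if it says "continue"
        # does the navigation head choose a movement action.
        stop_logits = state @ self.w_stop
        if int(np.argmax(stop_logits)) == 0:  # index 0 = STOP
            return "STOP"
        nav_logits = state @ self.w_nav
        return self.nav_actions[int(np.argmax(nav_logits))]

policy = StopAwarePolicy()
action = policy.act(np.ones(8))
```

Because STOP has its own head, its loss and decision boundary can be weighted independently of the movement actions, which is what lets an agent on the right path still learn to halt at the destination.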
Anthology ID:
2020.findings-emnlp.62
Volume:
Findings of the Association for Computational Linguistics: EMNLP 2020
Month:
November
Year:
2020
Address:
Online
Editors:
Trevor Cohn, Yulan He, Yang Liu
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
699–707
URL:
https://aclanthology.org/2020.findings-emnlp.62
DOI:
10.18653/v1/2020.findings-emnlp.62
Cite (ACL):
Jiannan Xiang, Xin Wang, and William Yang Wang. 2020. Learning to Stop: A Simple yet Effective Approach to Urban Vision-Language Navigation. In Findings of the Association for Computational Linguistics: EMNLP 2020, pages 699–707, Online. Association for Computational Linguistics.
Cite (Informal):
Learning to Stop: A Simple yet Effective Approach to Urban Vision-Language Navigation (Xiang et al., Findings 2020)
PDF:
https://preview.aclanthology.org/nschneid-patch-5/2020.findings-emnlp.62.pdf
Optional supplementary material:
2020.findings-emnlp.62.OptionalSupplementaryMaterial.zip
Data
Touchdown Dataset