Vision-and-Language Navigation with Analogical Textual Descriptions in LLMs
Yue Zhang | Tianyi Ma | Zun Wang | Yanyuan Qiao | Parisa Kordjamshidi
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing
Integrating large language models (LLMs) into embodied AI models is becoming increasingly prevalent. However, existing zero-shot LLM-based Vision-and-Language Navigation (VLN) agents either encode images as textual scene descriptions, potentially oversimplifying visual details, or process raw image inputs, which can fail to capture the abstract semantics required for high-level reasoning. In this paper, we improve the navigation agent's contextual understanding by incorporating textual descriptions that facilitate analogical reasoning across images from multiple perspectives. By leveraging text-based analogical reasoning, the agent enhances its global scene understanding and spatial reasoning, leading to more accurate action decisions. We evaluate our approach on the R2R dataset, and our experiments demonstrate significant improvements in navigation performance.
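The abstract does not detail how the analogical, multi-view prompting is realized. The sketch below is only a hedged illustration of the general idea described above: textual descriptions of several candidate views are placed side by side in a prompt, and the LLM is asked to compare the views before picking an action. All names (`ViewDescription`, `build_prompt`, `choose_action`), the prompt wording, and the stub LLM callable are assumptions for illustration, not the authors' implementation.

```python
# Illustrative sketch only (not the paper's released code): one way a zero-shot
# LLM-based VLN agent could be prompted with per-view textual descriptions and an
# explicit cross-view comparison step before choosing an action.

from dataclasses import dataclass
from typing import Callable, List


@dataclass
class ViewDescription:
    heading_deg: int   # camera heading of this candidate view
    caption: str       # textual scene description produced by a captioner (assumed)


def build_prompt(instruction: str, views: List[ViewDescription]) -> str:
    """Assemble a prompt that lists each view's description and asks the LLM to
    compare the views with one another (analogical reasoning) before deciding."""
    lines = [f"Navigation instruction: {instruction}", "", "Candidate views:"]
    for i, v in enumerate(views):
        lines.append(f"  [{i}] heading {v.heading_deg} deg: {v.caption}")
    lines += [
        "",
        "First, compare the candidate views with each other: note which views show",
        "similar rooms or objects and how they differ spatially.",
        "Then answer with the index of the view to move toward.",
    ]
    return "\n".join(lines)


def choose_action(prompt: str, llm: Callable[[str], str]) -> int:
    """Query any text-in/text-out LLM callable and parse the chosen view index."""
    reply = llm(prompt)
    digits = [c for c in reply if c.isdigit()]
    return int(digits[0]) if digits else 0


if __name__ == "__main__":
    views = [
        ViewDescription(0, "a hallway leading to a kitchen with a marble counter"),
        ViewDescription(90, "a living room with a grey sofa and a TV"),
        ViewDescription(180, "a closed wooden door next to a staircase"),
    ]
    prompt = build_prompt("Walk past the sofa and stop at the staircase.", views)
    # `llm` would normally wrap a real LLM API call; a stub keeps the sketch runnable.
    print(choose_action(prompt, llm=lambda p: "2"))
```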