LACMA: Language-Aligning Contrastive Learning with Meta-Actions for Embodied Instruction Following
Cheng-Fu Yang | Yen-Chun Chen | Jianwei Yang | Xiyang Dai | Lu Yuan | Yu-Chiang Wang | Kai-Wei Chang
Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing
End-to-end Transformers have demonstrated an impressive success rate for Embodied Instruction Following when the environment has been seen in training. However, they tend to struggle when deployed in an unseen environment. This lack of generalizability is due to the agent’s insensitivity to subtle changes in natural language instructions. To mitigate this issue, we propose explicitly aligning the agent’s hidden states with the instructions via contrastive learning. Nevertheless, the semantic gap between high-level language instructions and the agent’s low-level action space remains an obstacle. Therefore, we further introduce a novel concept of meta-actions to bridge the gap. Meta-actions are ubiquitous action patterns that can be parsed from the original action sequence. These patterns represent higher-level semantics that are intuitively aligned closer to the instructions. When meta-actions are applied as additional training signals, the agent generalizes better to unseen environments. Compared to a strong multi-modal Transformer baseline, we achieve a significant 4.5% absolute gain in success rate in unseen environments of ALFRED Embodied Instruction Following. Additional analysis shows that the contrastive objective and meta-actions are complementary in achieving the best results, and the resulting agent better aligns its states with corresponding instructions, making it more suitable for real-world embodied agents.
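The abstract describes explicitly aligning the agent's hidden states with language instructions through a contrastive objective. Below is a minimal sketch of what such an alignment loss could look like, assuming an InfoNCE-style formulation with in-batch negatives; the function name, tensor shapes, and temperature value are illustrative assumptions and not the paper's actual implementation.

```python
# Illustrative sketch only: a generic InfoNCE-style contrastive loss that pulls an
# agent's hidden state toward the embedding of its paired instruction and pushes it
# away from the other instructions in the batch. Shapes and hyperparameters are
# assumptions, not details taken from the paper.
import torch
import torch.nn.functional as F

def contrastive_alignment_loss(hidden_states, instr_embeddings, temperature=0.07):
    """hidden_states, instr_embeddings: (batch, dim) tensors; row i forms a positive pair."""
    h = F.normalize(hidden_states, dim=-1)
    t = F.normalize(instr_embeddings, dim=-1)
    logits = h @ t.T / temperature              # (batch, batch) similarity matrix
    targets = torch.arange(h.size(0), device=h.device)
    # Symmetric cross-entropy: align states to instructions and instructions to states.
    return 0.5 * (F.cross_entropy(logits, targets) + F.cross_entropy(logits.T, targets))
```

In this kind of setup, each hidden state is trained to be most similar to the embedding of its own instruction, which is one common way to close the gap the abstract points to between low-level agent states and high-level language.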