Amit Roy-Chowdhury
2025
HEAL: An Empirical Study on Hallucinations in Embodied Agents Driven by Large Language Models
Trishna Chakraborty | Udita Ghosh | Xiaopan Zhang | Fahim Faisal Niloy | Yue Dong | Jiachen Li | Amit Roy-Chowdhury | Chengyu Song
Findings of the Association for Computational Linguistics: EMNLP 2025
Large language models (LLMs) are increasingly being adopted as the cognitive core of embodied agents. However, inherited hallucinations, which stem from failures to ground user instructions in the observed physical environment, can lead to navigation errors, such as searching for a refrigerator that does not exist. In this paper, we present the first systematic study of hallucinations in LLM-based embodied agents performing long-horizon tasks under scene–task inconsistencies. Our goal is to understand to what extent hallucinations occur, what types of inconsistencies trigger them, and how current models respond. To achieve these goals, we construct a hallucination probing set by building on an existing benchmark, capable of inducing hallucination rates up to 40× higher than base prompts. Evaluating 12 models across two simulation environments, we find that while models exhibit reasoning, they fail to resolve scene-task inconsistencies — highlighting fundamental limitations in handling infeasible tasks. We also provide actionable insights on ideal model behavior for each scenario, offering guidance for developing more robust and reliable planning strategies.
2024
Can Textual Unlearning Solve Cross-Modality Safety Alignment?
Trishna Chakraborty | Erfan Shayegani | Zikui Cai | Nael B. Abu-Ghazaleh | M. Salman Asif | Yue Dong | Amit Roy-Chowdhury | Chengyu Song
Findings of the Association for Computational Linguistics: EMNLP 2024
Recent studies reveal that integrating new modalities into large language models (LLMs), such as vision-language models (VLMs), creates a new attack surface that bypasses existing safety training techniques like supervised fine-tuning (SFT) and reinforcement learning with human feedback (RLHF). While further SFT and RLHF-based safety training can be conducted in multi-modal settings, collecting multi-modal training datasets poses a significant challenge. Inspired by the structural design of recent multi-modal models, where all input modalities are ultimately fused into the language space, we explore whether unlearning solely in the textual domain can be effective for cross-modality safety alignment. Our empirical evaluation across seven datasets demonstrates promising transferability — textual unlearning in VLMs significantly reduces the Attack Success Rate (ASR) to less than 8% and in some cases, even as low as nearly 2% for both text-based and vision-text-based attacks, alongside preserving the utility. Moreover, our experiments show that unlearning with a multi-modal dataset offers no potential benefits but incurs significantly increased computational demands.