Elisa Passone
2025
Training Multi-Modal LLMs through Dialogue Planning for HRI
Claudiu Daniel Hromei | Federico Borazio | Andrea Sensi | Elisa Passone | Danilo Croce | Roberto Basili
Findings of the Association for Computational Linguistics: ACL 2025
Grounded natural language understanding in Human-Robot Interaction (HRI) requires integrating linguistic, visual, and world knowledge to ensure effective task execution. We propose an approach that enhances Multi-Modal Large Language Models (MLLMs) with a novel explicit dialogue planning phase, allowing robotic agents to systematically refine their understanding of ambiguous commands through structured clarification steps. This reduces hallucinations and improves task feasibility. To evaluate this approach, we introduce a novel dataset of over 1,100 annotated dialogues in English and Italian, designed for fine-tuning and assessing Multi-Modal models in HRI scenarios. Experimental results show that dialogue planning improves response accuracy and quality, and contributes to cross-lingual generalisation, enabling models trained in one language to transfer effectively to another. To the best of our knowledge, this is the first application of structured, goal-driven, and explicit dialogue planning in Multi-Modal LLMs for grounded interaction.
2024
Leveraging Large Language Models for Fact Verification in Italian
Antonio Scaiella | Stefano Costanzo | Elisa Passone | Danilo Croce | Giorgio Gambosi
Proceedings of the 10th Italian Conference on Computational Linguistics (CLiC-it 2024)
In recent years, Automatic Fact Checking has become a crucial tool in combating fake news, leveraging AI to verify the accuracy of information. Despite significant advancements, most datasets and models are predominantly available in English, posing challenges for other languages. This paper presents an Italian resource based on the dataset made available in the FEVER evaluation campaign, created to train and evaluate fact-checking models in Italian. The dataset comprises approximately 240k examples, with over 2k test examples manually validated. Additionally, we fine-tuned a state-of-the-art LLM, namely LLaMA3, on both the original English and translated Italian datasets, demonstrating that fine-tuning significantly improves model performance. Our results suggest that the fine-tuned models achieve comparable accuracy in both languages, highlighting the value of the proposed resource.