Do LLMs Understand Dialogues? A Case Study on Dialogue Acts

Ayesha Qamar, Jonathan Tong, Ruihong Huang


Abstract
Recent advances in NLP, largely driven by Large Language Models (LLMs), have significantly improved performance on a wide array of tasks. However, Dialogue Act (DA) classification remains challenging, particularly in the fine-grained, 50-class, multiparty setting. This paper investigates the root causes of LLMs' poor performance in DA classification through a linguistically motivated analysis. We identify three key pre-tasks essential for accurate DA prediction: Turn Management, Communicative Function Identification, and Dialogue Structure Prediction. Our experiments reveal that LLMs struggle with these fundamental tasks, often failing to outperform simple rule-based baselines. Additionally, we establish a strong empirical correlation between errors on these pre-tasks and DA classification failures. A human study further highlights the substantial gap between LLM and human-level dialogue understanding. These findings indicate that LLMs' shortcomings in dialogue comprehension hinder their ability to predict DAs accurately, underscoring the need for improved dialogue-aware training approaches.
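To make the task concrete, the sketch below shows what zero-shot, fine-grained DA classification of a multiparty dialogue with an off-the-shelf LLM might look like. This is a minimal illustration, not the authors' protocol: the prompt wording, the model name, and the small label subset (drawn from common DA taxonomies) are assumptions, and the paper's setting uses a full 50-label inventory.

```python
# Minimal sketch of zero-shot dialogue act (DA) classification with an LLM.
# Hypothetical setup: label subset, prompt wording, and model name are
# illustrative assumptions, not the paper's actual experimental protocol.
from openai import OpenAI

# A few fine-grained DA labels for illustration; the paper's setting
# involves a 50-label inventory.
DA_LABELS = ["Statement", "Wh-Question", "Yes-No-Question",
             "Acknowledgment", "Floor Grabber", "Interruption"]

def build_prompt(context: list[tuple[str, str]], target: tuple[str, str]) -> str:
    """Format a multiparty dialogue snippet and ask for the DA of the target turn."""
    history = "\n".join(f"{spk}: {utt}" for spk, utt in context)
    speaker, utterance = target
    return (
        "You are annotating dialogue acts in a multiparty meeting.\n"
        f"Labels: {', '.join(DA_LABELS)}\n\n"
        f"Dialogue so far:\n{history}\n\n"
        f"Target utterance:\n{speaker}: {utterance}\n\n"
        "Answer with exactly one label."
    )

def classify(context, target, model="gpt-4o-mini"):  # model name is an assumption
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    resp = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": build_prompt(context, target)}],
        temperature=0.0,  # deterministic labeling
    )
    return resp.choices[0].message.content.strip()

if __name__ == "__main__":
    context = [("A", "Should we move the deadline?"),
               ("B", "I think we have to.")]
    print(classify(context, ("C", "Yeah, okay.")))  # e.g. "Acknowledgment"
```

Note that the prediction depends on the preceding turns and the speaker change, which is exactly the dialogue-level context (turn management, communicative function, structure) the paper argues LLMs handle poorly.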
Anthology ID: 2025.acl-long.1271
Volume: Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
Month: July
Year: 2025
Address: Vienna, Austria
Editors: Wanxiang Che, Joyce Nabende, Ekaterina Shutova, Mohammad Taher Pilehvar
Venue: ACL
Publisher: Association for Computational Linguistics
Pages: 26219–26237
URL: https://preview.aclanthology.org/ingestion-acl-25/2025.acl-long.1271/
Cite (ACL): Ayesha Qamar, Jonathan Tong, and Ruihong Huang. 2025. Do LLMs Understand Dialogues? A Case Study on Dialogue Acts. In Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 26219–26237, Vienna, Austria. Association for Computational Linguistics.
Cite (Informal): Do LLMs Understand Dialogues? A Case Study on Dialogue Acts (Qamar et al., ACL 2025)
PDF: https://preview.aclanthology.org/ingestion-acl-25/2025.acl-long.1271.pdf