Towards LLM-powered Attentive Listener: A Pragmatic Approach through Quantity Self-Repair

Junlin Li, Peng Bo, Yu-Yin Hsu


Abstract
Grice’s Quantity Maxims dictate that human speakers aim for the optimal quantity of information during conversation. To empower LLMs to self-repair their responses toward optimal quantity and improve their attentive listening skills, we propose Q-Tuning and Q-Traveling, which draw on heuristic path-finding to enable decoder-only LLMs to travel among multiple “Q-alternatives” (Quantity Alternatives) and search for the optimal quantity in coordination with a conversation goal. Automatic and human evaluations demonstrate the effectiveness of Q-Tuning and Q-Traveling in constructing human-like, user-centered conversation agents.
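The abstract's idea of "traveling" among Q-alternatives can be pictured as a heuristic search over candidate responses that differ in information quantity. The sketch below is purely illustrative and not the paper's actual Q-Traveling algorithm: the `quantity` proxy (word count), the target value, and the candidate responses are all assumptions made for demonstration.

```python
import heapq

def quantity(response: str) -> int:
    """Toy proxy for information quantity: word count."""
    return len(response.split())

def q_travel(alternatives, target_quantity):
    """Pick the Q-alternative whose quantity is closest to an
    assumed target for the conversation goal -- a stand-in for
    heuristic path-finding over quantity alternatives."""
    # Score each alternative by its distance from the target quantity.
    frontier = [(abs(quantity(a) - target_quantity), a) for a in alternatives]
    heapq.heapify(frontier)
    best_score, best = heapq.heappop(frontier)
    return best

alternatives = [
    "Okay.",  # under-informative
    "That sounds really difficult; how are you holding up?",
    "I see. Anyway, here is a detailed summary of everything else.",  # over-informative
]
print(q_travel(alternatives, target_quantity=9))
```

Under these toy assumptions the mid-length, listener-oriented response wins, since its word count matches the target exactly.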
Anthology ID:
2025.acl-short.1
Volume:
Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)
Month:
July
Year:
2025
Address:
Vienna, Austria
Editors:
Wanxiang Che, Joyce Nabende, Ekaterina Shutova, Mohammad Taher Pilehvar
Venue:
ACL
Publisher:
Association for Computational Linguistics
Note:
Pages:
1–13
URL:
https://preview.aclanthology.org/display_plenaries/2025.acl-short.1/
Cite (ACL):
Junlin Li, Peng Bo, and Yu-Yin Hsu. 2025. Towards LLM-powered Attentive Listener: A Pragmatic Approach through Quantity Self-Repair. In Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 1–13, Vienna, Austria. Association for Computational Linguistics.
Cite (Informal):
Towards LLM-powered Attentive Listener: A Pragmatic Approach through Quantity Self-Repair (Li et al., ACL 2025)
PDF:
https://preview.aclanthology.org/display_plenaries/2025.acl-short.1.pdf