Abstract
People often answer yes-no questions without explicitly saying yes, no, or similar polar keywords. Figuring out the meaning of indirect answers is challenging, even for large language models. In this paper, we investigate this problem working with dialogues from multiple domains. We present new benchmarks in three diverse domains: movie scripts, tennis interviews, and airline customer service. We present an approach grounded on distant supervision and blended training to quickly adapt to a new dialogue domain. Experimental results show that our approach is never detrimental and yields F1 improvements as high as 11–34%.
- Anthology ID:
- 2024.findings-naacl.136
- Volume:
- Findings of the Association for Computational Linguistics: NAACL 2024
- Month:
- June
- Year:
- 2024
- Address:
- Mexico City, Mexico
- Editors:
- Kevin Duh, Helena Gomez, Steven Bethard
- Venue:
- Findings
- Publisher:
- Association for Computational Linguistics
- Pages:
- 2111–2128
- URL:
- https://preview.aclanthology.org/build-pipeline-with-new-library/2024.findings-naacl.136/
- DOI:
- 10.18653/v1/2024.findings-naacl.136
- Cite (ACL):
- Zijie Wang, Farzana Rashid, and Eduardo Blanco. 2024. Interpreting Answers to Yes-No Questions in Dialogues from Multiple Domains. In Findings of the Association for Computational Linguistics: NAACL 2024, pages 2111–2128, Mexico City, Mexico. Association for Computational Linguistics.
- Cite (Informal):
- Interpreting Answers to Yes-No Questions in Dialogues from Multiple Domains (Wang et al., Findings 2024)
- PDF:
- https://preview.aclanthology.org/build-pipeline-with-new-library/2024.findings-naacl.136.pdf