Automated Fact-Checking in Dialogue: Are Specialized Models Needed?

Eric Chamoun, Marzieh Saeidi, Andreas Vlachos


Abstract
Prior research has shown that typical fact-checking models for stand-alone claims struggle with claims made in conversation. As a solution, fine-tuning these models on dialogue data has been proposed. However, creating separate models for each use case is impractical, and we show that fine-tuning models for dialogue results in poor performance on typical fact-checking. To overcome this challenge, we present techniques that allow us to use the same models for both dialogue and typical fact-checking. These mainly focus on retrieval adaptation and transforming conversational inputs so that they can be accurately processed by models trained on stand-alone claims. We demonstrate that a typical fact-checking model incorporating these techniques is competitive with state-of-the-art models for dialogue, while maintaining its performance on stand-alone claims.
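The abstract mentions, but does not detail, a step that transforms conversational inputs so an existing stand-alone fact-checking pipeline can handle them. The following is a minimal, hypothetical Python sketch of that general idea: rewrite a dialogue turn into a stand-alone claim, then reuse an unchanged retrieval-and-verification pipeline. All function and class names here (rewrite_claim, retrieve_evidence, verify, Verdict) are illustrative placeholders, not the authors' implementation.

```python
from dataclasses import dataclass
from typing import List


@dataclass
class Verdict:
    label: str            # e.g. "SUPPORTED", "REFUTED", "NOT ENOUGH INFO"
    evidence: List[str]


def rewrite_claim(dialogue_history: List[str], claim: str) -> str:
    """Placeholder for a decontextualization step: turn a conversational
    utterance into a stand-alone claim by resolving references against the
    dialogue history. A real system might use a trained rewriter; here we
    simply prepend the most recent turn as crude context."""
    context = dialogue_history[-1] if dialogue_history else ""
    return f"{context} {claim}".strip()


def retrieve_evidence(standalone_claim: str) -> List[str]:
    """Placeholder for the unchanged evidence-retrieval component of a
    stand-alone fact-checking pipeline."""
    return [f"(retrieved passage for: {standalone_claim})"]


def verify(standalone_claim: str, evidence: List[str]) -> Verdict:
    """Placeholder for the unchanged claim-verification component."""
    return Verdict(label="NOT ENOUGH INFO", evidence=evidence)


def fact_check_dialogue_turn(dialogue_history: List[str], claim: str) -> Verdict:
    # Transform the conversational input first, then reuse the existing
    # stand-alone pipeline without fine-tuning it on dialogue data.
    standalone = rewrite_claim(dialogue_history, claim)
    evidence = retrieve_evidence(standalone)
    return verify(standalone, evidence)


if __name__ == "__main__":
    history = ["The Eiffel Tower was finished in 1889."]
    print(fact_check_dialogue_turn(history, "It was built for the World's Fair."))
```

The point of the sketch is the pipeline shape: only the input transformation is dialogue-aware, while retrieval and verification remain the same components one would use for stand-alone claims.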
Anthology ID:
2023.emnlp-main.993
Volume:
Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing
Month:
December
Year:
2023
Address:
Singapore
Editors:
Houda Bouamor, Juan Pino, Kalika Bali
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
16009–16020
URL:
https://aclanthology.org/2023.emnlp-main.993
DOI:
10.18653/v1/2023.emnlp-main.993
Cite (ACL):
Eric Chamoun, Marzieh Saeidi, and Andreas Vlachos. 2023. Automated Fact-Checking in Dialogue: Are Specialized Models Needed? In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 16009–16020, Singapore. Association for Computational Linguistics.
Cite (Informal):
Automated Fact-Checking in Dialogue: Are Specialized Models Needed? (Chamoun et al., EMNLP 2023)
PDF:
https://preview.aclanthology.org/dois-2013-emnlp/2023.emnlp-main.993.pdf
Video:
https://preview.aclanthology.org/dois-2013-emnlp/2023.emnlp-main.993.mp4