With a Little Push, NLI Models can Robustly and Efficiently Predict Faithfulness

Julius Steen, Juri Opitz, Anette Frank, Katja Markert


Abstract
Conditional language models still generate unfaithful output that is not supported by their input. These unfaithful generations jeopardize trust in real-world applications such as summarization or human-machine interaction, motivating a need for automatic faithfulness metrics. To implement such metrics, NLI models seem attractive, since they solve a strongly related task that comes with a wealth of prior research and data. But recent research suggests that NLI models require costly additional machinery to perform reliably across datasets, e.g., by running inference on a Cartesian product of input and generated sentences, or supporting them with a question-generation/answering step. In this work we show that pure NLI models _can_ outperform more complex metrics when combining task-adaptive data augmentation with robust inference procedures. We propose: (1) augmenting NLI training data to adapt NL inferences to the specificities of faithfulness prediction in dialogue; (2) making use of both entailment and contradiction probabilities in NLI; and (3) using Monte-Carlo dropout during inference. Applied to the TRUE benchmark, which combines faithfulness datasets across diverse domains and tasks, our approach strongly improves a vanilla NLI model and significantly outperforms previous work, while showing favourable computational cost.
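The abstract mentions two inference-time ingredients: combining entailment and contradiction probabilities, and Monte-Carlo dropout. The sketch below illustrates how such a scorer could look with an off-the-shelf NLI model; it is not the authors' implementation (which also fine-tunes on augmented data), and the checkpoint name, label order, and the particular score (entailment minus contradiction, averaged over dropout samples) are assumptions made for illustration.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Assumed off-the-shelf checkpoint; its label order is 0=contradiction, 1=neutral, 2=entailment.
MODEL_NAME = "roberta-large-mnli"
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME)


def faithfulness_score(source: str, generated: str, n_samples: int = 10) -> float:
    """Score in [-1, 1]: P(entailment) - P(contradiction), averaged over
    Monte-Carlo dropout samples (illustrative choice, not the paper's exact metric)."""
    inputs = tokenizer(source, generated, return_tensors="pt", truncation=True)
    model.train()  # keep dropout layers active so repeated forward passes differ
    scores = []
    with torch.no_grad():
        for _ in range(n_samples):
            probs = torch.softmax(model(**inputs).logits, dim=-1)[0]
            scores.append((probs[2] - probs[0]).item())  # entailment minus contradiction
    return sum(scores) / len(scores)


# Usage example: a faithful paraphrase should score close to 1.
print(faithfulness_score("The cat sat on the mat.", "A cat was on a mat."))
```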
Anthology ID:
2023.acl-short.79
Volume:
Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)
Month:
July
Year:
2023
Address:
Toronto, Canada
Editors:
Anna Rogers, Jordan Boyd-Graber, Naoaki Okazaki
Venue:
ACL
Publisher:
Association for Computational Linguistics
Pages:
914–924
URL:
https://aclanthology.org/2023.acl-short.79
DOI:
10.18653/v1/2023.acl-short.79
Cite (ACL):
Julius Steen, Juri Opitz, Anette Frank, and Katja Markert. 2023. With a Little Push, NLI Models can Robustly and Efficiently Predict Faithfulness. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 914–924, Toronto, Canada. Association for Computational Linguistics.
Cite (Informal):
With a Little Push, NLI Models can Robustly and Efficiently Predict Faithfulness (Steen et al., ACL 2023)
PDF:
https://preview.aclanthology.org/improve-issue-templates/2023.acl-short.79.pdf
Video:
https://preview.aclanthology.org/improve-issue-templates/2023.acl-short.79.mp4