Entailment Progressions: A Robust Approach to Evaluating Reasoning Within Larger Discourse

Rishabh Shastry, Patricia Chiril, Joshua Charney, David Uminsky


Abstract
Textual entailment, or the ability to deduce whether a proposed hypothesis is logically supported by a given premise, has historically been applied to the evaluation of language modelling efficiency in tasks like question answering and text summarization. However, we hypothesize that these zero-shot entailment evaluations can be extended to the task of evaluating discourse within larger textual narratives. In this paper, we propose a simple but effective method that sequentially evaluates changes in textual entailment between sentences within a larger text, in an approach we denote as “Entailment Progressions”. These entailment progressions aim to capture the inference relations between sentences as an underlying component capable of distinguishing texts generated from various models and procedures. Our results suggest that entailment progressions can be used to effectively distinguish between machine-generated and human-authored texts across multiple established benchmark corpora and our own EP4MGT dataset. Additionally, our method remains robust when evaluated on paraphrased texts, a technique that has historically degraded the performance of well-established metrics for distinguishing between machine-generated and human-authored texts.
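
As a rough illustration of the idea sketched in the abstract, the snippet below labels the entailment relation between consecutive sentences of a text using an off-the-shelf NLI model (roberta-large-mnli). The model choice, the consecutive-pair scheme, and the use of the resulting label sequence are assumptions for illustration only and may differ from the procedure described in the paper.

```python
# Minimal sketch: an "entailment progression" as the sequence of NLI labels
# between consecutive sentences. Model choice and pairing scheme are assumed,
# not taken from the paper.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

MODEL_NAME = "roberta-large-mnli"
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME)
model.eval()

def entailment_progression(sentences):
    """Return the NLI label (ENTAILMENT / NEUTRAL / CONTRADICTION)
    for each consecutive (premise, hypothesis) sentence pair."""
    labels = []
    for premise, hypothesis in zip(sentences, sentences[1:]):
        inputs = tokenizer(premise, hypothesis, return_tensors="pt", truncation=True)
        with torch.no_grad():
            logits = model(**inputs).logits
        labels.append(model.config.id2label[logits.argmax(dim=-1).item()])
    return labels

text = [
    "The river flooded after days of heavy rain.",
    "Nearby roads were closed to traffic.",
    "The weather had been dry all month.",
]
print(entailment_progression(text))  # e.g. ['NEUTRAL', 'CONTRADICTION']
```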
Anthology ID:
2025.nodalida-1.66
Volume:
Proceedings of the Joint 25th Nordic Conference on Computational Linguistics and 11th Baltic Conference on Human Language Technologies (NoDaLiDa/Baltic-HLT 2025)
Month:
March
Year:
2025
Address:
Tallinn, Estonia
Editors:
Richard Johansson, Sara Stymne
Venue:
NoDaLiDa
Publisher:
University of Tartu Library
Pages:
651–660
URL:
https://preview.aclanthology.org/moar-dois/2025.nodalida-1.66/
Cite (ACL):
Rishabh Shastry, Patricia Chiril, Joshua Charney, and David Uminsky. 2025. Entailment Progressions: A Robust Approach to Evaluating Reasoning Within Larger Discourse. In Proceedings of the Joint 25th Nordic Conference on Computational Linguistics and 11th Baltic Conference on Human Language Technologies (NoDaLiDa/Baltic-HLT 2025), pages 651–660, Tallinn, Estonia. University of Tartu Library.
Cite (Informal):
Entailment Progressions: A Robust Approach to Evaluating Reasoning Within Larger Discourse (Shastry et al., NoDaLiDa 2025)
PDF:
https://preview.aclanthology.org/moar-dois/2025.nodalida-1.66.pdf