Abstract
Fact verification systems typically rely on neural network classifiers for veracity prediction, which lack explainability. This paper proposes ProoFVer, which uses a seq2seq model to generate natural logic-based inferences as proofs. These proofs consist of lexical mutations between spans in the claim and the retrieved evidence, each marked with a natural logic operator. Claim veracity is determined solely by the sequence of these operators, so the proofs are faithful explanations, making ProoFVer faithful by construction. Currently, ProoFVer has the highest label accuracy and the second best score on the FEVER leaderboard. Furthermore, it improves by 13.21 percentage points over the next best model on a dataset with counterfactual instances, demonstrating its robustness. As explanations, the proofs show better overlap with human rationales than attention-based highlights, and they help humans predict model decisions correctly more often than the evidence alone.
- Anthology ID:
- 2022.tacl-1.59
- Volume:
- Transactions of the Association for Computational Linguistics, Volume 10
- Year:
- 2022
- Address:
- Cambridge, MA
- Editors:
- Brian Roark, Ani Nenkova
- Venue:
- TACL
- Publisher:
- MIT Press
- Pages:
- 1013–1030
- URL:
- https://aclanthology.org/2022.tacl-1.59
- DOI:
- 10.1162/tacl_a_00503
- Cite (ACL):
- Amrith Krishna, Sebastian Riedel, and Andreas Vlachos. 2022. ProoFVer: Natural Logic Theorem Proving for Fact Verification. Transactions of the Association for Computational Linguistics, 10:1013–1030.
- Cite (Informal):
- ProoFVer: Natural Logic Theorem Proving for Fact Verification (Krishna et al., TACL 2022)
- PDF:
- https://aclanthology.org/2022.tacl-1.59.pdf
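
The abstract notes that claim veracity is determined solely from the sequence of natural logic operators in the generated proof. As a rough illustration of that idea, the sketch below folds an operator sequence through a small deterministic finite-state automaton to produce a FEVER-style label. This is a minimal sketch, not ProoFVer's implementation: the state names, the transition table, and the veracity() helper are illustrative assumptions, and the exact automaton used by the system is the one defined in the paper.

```python
# Illustrative sketch: compose a proof's natural logic operators with a DFA
# to obtain a veracity label. The transition table is a simplified,
# hypothetical one (loosely MacCartney-style), not ProoFVer's exact table.
from typing import Iterable

# Three verdict states, mirroring FEVER-style labels.
SUPPORTED, REFUTED, NOT_ENOUGH_INFO = "S", "R", "N"

# Hypothetical transition table: current state x operator -> next state.
TRANSITIONS = {
    SUPPORTED: {"≡": SUPPORTED, "⊑": SUPPORTED, "⊒": NOT_ENOUGH_INFO,
                "¬": REFUTED, "|": REFUTED, "#": NOT_ENOUGH_INFO},
    REFUTED:   {"≡": REFUTED, "⊑": REFUTED, "⊒": NOT_ENOUGH_INFO,
                "¬": SUPPORTED, "|": NOT_ENOUGH_INFO, "#": NOT_ENOUGH_INFO},
    NOT_ENOUGH_INFO: {op: NOT_ENOUGH_INFO for op in "≡⊑⊒¬|#"},
}

def veracity(operators: Iterable[str]) -> str:
    """Fold the proof's operator sequence through the DFA, starting from SUPPORTED."""
    state = SUPPORTED
    for op in operators:
        state = TRANSITIONS[state][op]
    return state

# Example: a claim whose final mutation contradicts the evidence
# (e.g. two equivalent spans followed by an alternation) ends in REFUTED.
if __name__ == "__main__":
    print(veracity(["≡", "≡", "|"]))  # -> "R"
```

Because the label depends only on this deterministic composition of operators, the proof itself is the explanation of the decision, which is what the abstract means by "faithful by construction".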