Claim-Dissector: An Interpretable Fact-Checking System with Joint Re-ranking and Veracity Prediction
Abstract
We present Claim-Dissector: a novel latent variable model for fact-checking and analysis, which, given a claim and a set of retrieved evidence, jointly learns to identify (i) the evidences relevant to the given claim and (ii) the veracity of the claim. We propose to disentangle the per-evidence relevance probability and its contribution to the final veracity probability in an interpretable way — the final veracity probability is proportional to a linear ensemble of per-evidence relevance probabilities. This way, the individual contribution of each evidence to the final predicted probability can be identified. Within the per-evidence relevance probability, our model can further distinguish whether each relevant evidence is supporting (S) or refuting (R) the claim. This allows us to quantify how much the S/R probability contributes to the final verdict, or to detect disagreeing evidence. Despite its interpretable nature, our system achieves results competitive with the state of the art on the FEVER dataset, as compared to typical two-stage system pipelines, while using significantly fewer parameters. Furthermore, our analysis shows that our model can learn fine-grained relevance cues from coarse-grained supervision, and we demonstrate this in two ways. (i) We show that our model can achieve competitive sentence recall while using only paragraph-level relevance supervision. (ii) Traversing towards the finest granularity of relevance, we show that our model is capable of identifying relevance at the token level. To do this, we present a new benchmark, TLR-FEVER, focusing on token-level interpretability — humans annotate the tokens in relevant evidences they considered essential when making their judgment. We then measure how similar these annotations are to the tokens our model is focusing on.
- Anthology ID:
- 2023.findings-acl.647
- Volume:
- Findings of the Association for Computational Linguistics: ACL 2023
- Month:
- July
- Year:
- 2023
- Address:
- Toronto, Canada
- Editors:
- Anna Rogers, Jordan Boyd-Graber, Naoaki Okazaki
- Venue:
- Findings
- Publisher:
- Association for Computational Linguistics
- Pages:
- 10184–10205
- URL:
- https://aclanthology.org/2023.findings-acl.647
- DOI:
- 10.18653/v1/2023.findings-acl.647
- Cite (ACL):
- Martin Fajcik, Petr Motlicek, and Pavel Smrz. 2023. Claim-Dissector: An Interpretable Fact-Checking System with Joint Re-ranking and Veracity Prediction. In Findings of the Association for Computational Linguistics: ACL 2023, pages 10184–10205, Toronto, Canada. Association for Computational Linguistics.
- Cite (Informal):
- Claim-Dissector: An Interpretable Fact-Checking System with Joint Re-ranking and Veracity Prediction (Fajcik et al., Findings 2023)
- PDF:
- https://preview.aclanthology.org/ingest-acl-2023-videos/2023.findings-acl.647.pdf
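The abstract's central idea — a final veracity probability proportional to a linear ensemble of per-evidence (S)upport/(R)efute relevance probabilities — can be sketched as follows. This is an illustrative toy aggregation under assumed shapes and numbers, not the paper's exact parametrization: the per-evidence probabilities `p_rel` and the way NOT-ENOUGH-INFO absorbs residual mass are hypothetical choices made here for clarity.

```python
import numpy as np

# Hypothetical per-evidence relevance probabilities:
# p_rel[i, 0] = P(evidence i supports the claim),
# p_rel[i, 1] = P(evidence i refutes the claim).
p_rel = np.array([
    [0.90, 0.02],   # evidence 0: strongly supporting
    [0.10, 0.70],   # evidence 1: refuting (disagreeing evidence)
    [0.85, 0.03],   # evidence 2: supporting
])

# Linear ensemble: class scores are sums of per-evidence probabilities.
# NOT-ENOUGH-INFO absorbs each evidence's remaining mass (an illustrative
# choice made for this sketch).
score_s = p_rel[:, 0].sum()
score_r = p_rel[:, 1].sum()
score_nei = (1.0 - p_rel.sum(axis=1)).sum()

scores = np.array([score_s, score_r, score_nei])
# Verdict probabilities are proportional to the linear ensemble.
verdict_probs = scores / scores.sum()

# Interpretability: each evidence's contribution to each class is directly
# identifiable, e.g. evidence 1's refuting contribution to the verdict:
contrib_r_ev1 = p_rel[1, 1] / scores.sum()
```

Because the aggregation is linear, inspecting `p_rel` row by row immediately shows which evidence drove the verdict and which evidence disagreed with it — the property the abstract highlights.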