@inproceedings{jayaweera-etal-2024-amrex,
    title = "{AMRE}x: {AMR} for Explainable Fact Verification",
    author = "Jayaweera, Chathuri  and
      Youm, Sangpil  and
      Dorr, Bonnie J",
    editor = "Schlichtkrull, Michael  and
      Chen, Yulong  and
      Whitehouse, Chenxi  and
      Deng, Zhenyun  and
      Akhtar, Mubashara  and
      Aly, Rami  and
      Guo, Zhijiang  and
      Christodoulopoulos, Christos  and
      Cocarascu, Oana  and
      Mittal, Arpit  and
      Thorne, James  and
      Vlachos, Andreas",
    booktitle = "Proceedings of the Seventh Fact Extraction and VERification Workshop (FEVER)",
    month = nov,
    year = "2024",
    address = "Miami, Florida, USA",
    publisher = "Association for Computational Linguistics",
    url = "https://preview.aclanthology.org/ingest-emnlp/2024.fever-1.26/",
    doi = "10.18653/v1/2024.fever-1.26",
    pages = "234--244",
    abstract = "With the advent of social media networks and the vast amount of information circulating through them, automatic fact verification is an essential component in preventing the spread of misinformation. It is even more useful to have fact verification systems that provide explanations along with their classifications to ensure accurate predictions. To address both of these requirements, we implement AMREx, an Abstract Meaning Representation (AMR)-based veracity prediction and explanation system for fact verification, using a combination of Smatch, an AMR evaluation metric, to measure meaning containment and textual similarity, and we demonstrate its effectiveness in producing partially explainable justifications on two community-standard fact verification datasets, FEVER and AVeriTeC. AMREx surpasses the AVeriTeC baseline accuracy, showing the effectiveness of our approach for real-world claim verification. It follows an interpretable pipeline and returns an explainable AMR node mapping to clarify the system{'}s veracity predictions when applicable. We further demonstrate that AMREx output can be used to prompt LLMs to generate natural-language explanations, using the AMR mappings as a guide to lessen the probability of hallucinations."
}