AMR4NLI: Interpretable and robust NLI measures from semantic graphs

Juri Opitz, Shira Wein, Julius Steen, Anette Frank, Nathan Schneider


Abstract
The task of natural language inference (NLI) asks whether a given premise (expressed in natural language) entails a given natural-language hypothesis. NLI benchmarks contain human ratings of entailment, but the meaning relationships driving these ratings are not formalized. Can the underlying sentence pair relationships be made more explicit in an interpretable yet robust fashion? We compare semantic structures to represent premise and hypothesis, including sets of *contextualized embeddings* and *semantic graphs* (Abstract Meaning Representations), and measure whether the hypothesis is a semantic substructure of the premise, utilizing interpretable metrics. Our evaluation on three English benchmarks finds value in both contextualized embeddings and semantic graphs; moreover, they provide complementary signals, and can be leveraged together in a hybrid model.
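To illustrate the general idea of a substructure-based entailment measure, here is a minimal sketch, not the paper's exact metric: AMR graphs are treated as sets of (relation, head, dependent) triples, and the score is the fraction of hypothesis triples that also appear in the premise. All names and the toy triples below are hypothetical.

```python
def containment_score(premise_triples, hypothesis_triples):
    """Fraction of hypothesis triples found in the premise.

    A score near 1.0 suggests the hypothesis graph is a semantic
    substructure of the premise graph (illustrative only).
    """
    if not hypothesis_triples:
        return 1.0  # an empty hypothesis is trivially contained
    matched = sum(1 for t in hypothesis_triples if t in premise_triples)
    return matched / len(hypothesis_triples)


# Toy AMR-style triples for "The cat sleeps on the mat" (premise)
premise = {
    ("instance", "s", "sleep-01"),
    ("instance", "c", "cat"),
    ("ARG0", "s", "c"),
    ("instance", "m", "mat"),
    ("location", "s", "m"),
}
# ... and "The cat sleeps" (hypothesis)
hypothesis = {
    ("instance", "s", "sleep-01"),
    ("instance", "c", "cat"),
    ("ARG0", "s", "c"),
}

print(containment_score(premise, hypothesis))  # 1.0
```

In practice such a measure would need graph alignment (variable names need not match across graphs) and soft matching of concepts, which is where the paper's contextualized-embedding signal comes in; this sketch only shows the exact-match baseline idea.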
Anthology ID:
2023.iwcs-1.29
Volume:
Proceedings of the 15th International Conference on Computational Semantics
Month:
June
Year:
2023
Address:
Nancy, France
Editors:
Maxime Amblard, Ellen Breitholtz
Venue:
IWCS
SIG:
SIGSEM
Publisher:
Association for Computational Linguistics
Pages:
275–283
URL:
https://aclanthology.org/2023.iwcs-1.29
Cite (ACL):
Juri Opitz, Shira Wein, Julius Steen, Anette Frank, and Nathan Schneider. 2023. AMR4NLI: Interpretable and robust NLI measures from semantic graphs. In Proceedings of the 15th International Conference on Computational Semantics, pages 275–283, Nancy, France. Association for Computational Linguistics.
Cite (Informal):
AMR4NLI: Interpretable and robust NLI measures from semantic graphs (Opitz et al., IWCS 2023)
PDF:
https://preview.aclanthology.org/nschneid-patch-5/2023.iwcs-1.29.pdf