Evaluate AMR Graph Similarity via Self-supervised Learning

Ziyi Shou, Fangzhen Lin


Abstract
In work on AMR (Abstract Meaning Representation), similarity metrics are crucial because they are used to evaluate AMR systems such as AMR parsers. Current AMR metrics are all based on node or triple matching and do not consider the entire structure of AMR graphs. To address this problem, and inspired by learned similarity evaluation on plain text, we propose AMRSim, an automatic AMR graph similarity evaluation metric. To overcome the high cost of collecting human-annotated data, AMRSim automatically generates silver AMR graphs and utilizes self-supervised learning methods. We evaluated AMRSim on various datasets and found that it significantly improves correlation with human semantic scores and remains robust under diverse challenges. We also discuss how AMRSim can be extended to multilingual cases.
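To make the embedding-based idea behind such a metric concrete, the sketch below scores two linearized AMR graphs by cosine similarity of pooled transformer representations. This is only an illustration of the general approach the abstract describes, not the authors' released AMRSim model or training pipeline: the encoder name ("bert-base-uncased"), the naive PENMAN-string linearization, and the mean-pooling step are all placeholder assumptions.

```python
# Illustrative sketch: embedding-based similarity for linearized AMR graphs.
# Assumption: a generic pretrained encoder stands in for the actual
# self-supervised AMR encoder described in the paper.
import torch
from transformers import AutoModel, AutoTokenizer


def encode(texts, model, tokenizer):
    """Mean-pool the last hidden states into one vector per input string."""
    batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**batch).last_hidden_state        # (B, T, H)
    mask = batch["attention_mask"].unsqueeze(-1)          # (B, T, 1)
    return (hidden * mask).sum(dim=1) / mask.sum(dim=1)   # (B, H)


def amr_similarity(amr_a: str, amr_b: str) -> float:
    """Cosine similarity between two AMR graphs given as PENMAN strings."""
    tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
    model = AutoModel.from_pretrained("bert-base-uncased")
    emb = encode([amr_a, amr_b], model, tokenizer)
    return torch.cosine_similarity(emb[0], emb[1], dim=0).item()


if __name__ == "__main__":
    g1 = "(w / want-01 :ARG0 (b / boy) :ARG1 (g / go-02 :ARG0 b))"
    g2 = "(d / desire-01 :ARG0 (b / boy) :ARG1 (l / leave-11 :ARG0 b))"
    print(f"similarity: {amr_similarity(g1, g2):.3f}")
```

Unlike triple-matching metrics such as Smatch, a score of this kind compares whole-graph representations, which is the property the abstract emphasizes; the quality of the score depends entirely on how the encoder is trained.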
Anthology ID:
2023.acl-long.892
Volume:
Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
Month:
July
Year:
2023
Address:
Toronto, Canada
Editors:
Anna Rogers, Jordan Boyd-Graber, Naoaki Okazaki
Venue:
ACL
Publisher:
Association for Computational Linguistics
Pages:
16112–16123
URL:
https://aclanthology.org/2023.acl-long.892
DOI:
10.18653/v1/2023.acl-long.892
Cite (ACL):
Ziyi Shou and Fangzhen Lin. 2023. Evaluate AMR Graph Similarity via Self-supervised Learning. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 16112–16123, Toronto, Canada. Association for Computational Linguistics.
Cite (Informal):
Evaluate AMR Graph Similarity via Self-supervised Learning (Shou & Lin, ACL 2023)
PDF:
https://preview.aclanthology.org/nschneid-patch-5/2023.acl-long.892.pdf
Video:
https://preview.aclanthology.org/nschneid-patch-5/2023.acl-long.892.mp4