PAM: Paraphrase AMR-Centric Evaluation Metric

Afonso Sousa, Henrique Lopes Cardoso


Abstract
Paraphrasing is rooted in semantics, which makes evaluating paraphrase generation systems hard. Current paraphrase generators are typically evaluated with metrics borrowed from adjacent text-to-text tasks, such as machine translation or text summarization. These metrics tend to be tied to the surface form of the reference text, which is not ideal for paraphrases, where we typically want lexical variation while preserving semantics. To address this problem, and inspired by learned similarity evaluation on plain text, we propose PAM, a Paraphrase AMR-Centric Evaluation Metric. PAM uses AMR graphs extracted from the input text; these semantic structures are agnostic to the surface form, making the resulting metric more robust to variation in syntax or lexicon. We evaluate PAM on several semantic textual similarity datasets and find that it correlates better with human semantic scores than other AMR-based metrics.
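To make the AMR-centric idea concrete, the sketch below scores two sentences by parsing each into an AMR graph and measuring overlap between their concept-level triples, so the comparison ignores surface wording. This is a minimal illustration only, not the PAM metric (which uses a learned similarity over AMR graphs); it assumes the amrlib and penman packages are installed, along with one of amrlib's pretrained sentence-to-graph models.

```python
# Minimal AMR-overlap sketch (NOT the PAM metric itself): parse two sentences
# to AMR with amrlib, then compute F1 over their concept-level triples.
import amrlib
import penman


def concept_triples(amr_string: str) -> set:
    """Return the graph's triples with variables replaced by their concepts,
    so arbitrary variable names (c, z0, ...) do not block matching."""
    graph = penman.decode(amr_string)
    var2concept = {v: c for v, _, c in graph.instances()}
    return {
        (var2concept.get(s, s), role, var2concept.get(t, t))
        for s, role, t in graph.triples
        if role != ":instance"
    }


def amr_overlap_f1(candidate: str, reference: str, parser) -> float:
    """Crude surface-form-agnostic score: F1 over concept-level AMR triples."""
    cand_amr, ref_amr = parser.parse_sents([candidate, reference])
    cand, ref = concept_triples(cand_amr), concept_triples(ref_amr)
    if not cand or not ref:
        return 0.0
    overlap = len(cand & ref)
    if overlap == 0:
        return 0.0
    precision, recall = overlap / len(cand), overlap / len(ref)
    return 2 * precision * recall / (precision + recall)


if __name__ == "__main__":
    parser = amrlib.load_stog_model()  # sentence -> AMR graph model
    score = amr_overlap_f1("The cat chased the mouse.",
                           "The mouse was chased by the cat.", parser)
    print(f"AMR triple-overlap F1: {score:.3f}")
```

Because the two example sentences share the same predicate-argument structure, their AMR graphs largely coincide despite the active/passive difference; a surface-level metric such as BLEU would penalize the same pair. Note that this simplification skips the variable alignment that Smatch performs and the learned scoring that PAM contributes.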
Anthology ID:
2025.findings-acl.879
Volume:
Findings of the Association for Computational Linguistics: ACL 2025
Month:
July
Year:
2025
Address:
Vienna, Austria
Editors:
Wanxiang Che, Joyce Nabende, Ekaterina Shutova, Mohammad Taher Pilehvar
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
17106–17121
URL:
https://preview.aclanthology.org/display_plenaries/2025.findings-acl.879/
Cite (ACL):
Afonso Sousa and Henrique Lopes Cardoso. 2025. PAM: Paraphrase AMR-Centric Evaluation Metric. In Findings of the Association for Computational Linguistics: ACL 2025, pages 17106–17121, Vienna, Austria. Association for Computational Linguistics.
Cite (Informal):
PAM: Paraphrase AMR-Centric Evaluation Metric (Sousa & Lopes Cardoso, Findings 2025)
PDF:
https://preview.aclanthology.org/display_plenaries/2025.findings-acl.879.pdf