Cross-lingual AMR Aligner: Paying Attention to Cross-Attention

Abelardo Carlos Martínez Lorenzo, Pere Lluís Huguet Cabot, Roberto Navigli


Abstract
This paper introduces a novel aligner for Abstract Meaning Representation (AMR) graphs that scales cross-lingually and is thus capable of aligning units and spans in sentences of different languages. Our approach leverages modern Transformer-based parsers, which inherently encode alignment information in their cross-attention weights, allowing us to extract this information during parsing. This eliminates the need for the English-specific rules or the Expectation Maximization (EM) algorithm used in previous approaches. In addition, we propose a guided supervised method that uses alignment to further enhance the performance of our aligner. We achieve state-of-the-art results on the benchmarks for AMR alignment and demonstrate our aligner’s ability to obtain such results across multiple languages. Our code will be available at [https://www.github.com/babelscape/AMR-alignment](https://www.github.com/babelscape/AMR-alignment).
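To make the core idea concrete, here is a minimal sketch (not the authors' released code) of how cross-attention weights can be read out of a Hugging Face seq2seq model and turned into token-level alignments by taking, for each graph token, the argmax over source positions. The `t5-small` checkpoint and the linearised graph string are stand-ins chosen so the snippet runs; the paper's aligner operates on a Transformer-based AMR parser, and its actual attention aggregation may differ.

```python
# Hedged sketch: extract cross-attention from a seq2seq model and derive
# a simple argmax alignment between graph tokens and sentence tokens.
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

MODEL = "t5-small"  # stand-in checkpoint, NOT the paper's AMR parser
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForSeq2SeqLM.from_pretrained(MODEL, output_attentions=True)
model.eval()

sentence = "The boy wants to go."
graph = "( want-01 :ARG0 ( boy ) :ARG1 ( go-02 ) )"  # illustrative linearisation

enc = tokenizer(sentence, return_tensors="pt")
dec = tokenizer(graph, return_tensors="pt")

with torch.no_grad():
    # labels are shifted right internally to form the decoder inputs
    out = model(**enc, labels=dec["input_ids"])

# out.cross_attentions: one (batch, heads, tgt_len, src_len) tensor per layer
attn = torch.stack(out.cross_attentions)   # (layers, batch, heads, tgt, src)
attn = attn.mean(dim=(0, 2)).squeeze(0)    # average layers and heads -> (tgt, src)

src = tokenizer.convert_ids_to_tokens(enc["input_ids"][0])
tgt = tokenizer.convert_ids_to_tokens(dec["input_ids"][0])
# Note: rows index decoder positions, which are shifted one step from the labels.
for tok, row in zip(tgt, attn):
    print(f"{tok:>10} -> {src[row.argmax().item()]}")
```

The paper's guided supervised variant additionally trains the parser so that these cross-attention distributions match the alignments; one natural (assumed, not confirmed) formulation would be a cross-entropy loss between each row of `attn` and a gold alignment matrix over source positions.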
Anthology ID: 2023.findings-acl.109
Volume: Findings of the Association for Computational Linguistics: ACL 2023
Month: July
Year: 2023
Address: Toronto, Canada
Editors: Anna Rogers, Jordan Boyd-Graber, Naoaki Okazaki
Venue: Findings
Publisher: Association for Computational Linguistics
Pages: 1726–1742
URL: https://aclanthology.org/2023.findings-acl.109
DOI: 10.18653/v1/2023.findings-acl.109
Cite (ACL): Abelardo Carlos Martínez Lorenzo, Pere Lluís Huguet Cabot, and Roberto Navigli. 2023. Cross-lingual AMR Aligner: Paying Attention to Cross-Attention. In Findings of the Association for Computational Linguistics: ACL 2023, pages 1726–1742, Toronto, Canada. Association for Computational Linguistics.
Cite (Informal): Cross-lingual AMR Aligner: Paying Attention to Cross-Attention (Martínez Lorenzo et al., Findings 2023)
PDF: https://aclanthology.org/2023.findings-acl.109.pdf