Modal Dependency Parsing via Biaffine Attention with Self-Loop

Jayeol Chun, Nianwen Xue


Abstract
A modal dependency structure represents a web of connections between events and sources of information in a document, allowing who-said-what to be traced along with the associated level of certainty, thereby establishing factuality in an event-centric way. Obtaining such graphs defines the task of modal dependency parsing, which involves identifying events and sources as well as the modal relations between them. In this paper, we propose a simple yet effective solution based on biaffine attention that addresses the domain-specific challenges of modal dependency parsing by integrating a self-loop. We show that our approach, when coupled with data augmentation that leverages Large Language Models to translate annotations from one language to another, outperforms the previous state-of-the-art on English and Chinese datasets by 2% and 4%, respectively.
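
To make the abstract's core mechanism concrete, below is a minimal PyTorch sketch of a biaffine arc scorer, in the style of Dozat and Manning (2017), with the diagonal left unmasked so that a node may select itself as its own head. This is an illustrative sketch, not the authors' released implementation; the class name, dimensions, and the exact way the self-loop enters the model are assumptions.

```python
import torch
import torch.nn as nn


class BiaffineArcScorer(nn.Module):
    """Biaffine attention over token pairs, with the diagonal
    (self-loop) scores kept live instead of masked out as in
    standard syntactic dependency parsing.
    """

    def __init__(self, hidden_dim: int, arc_dim: int = 256):
        super().__init__()
        # Separate views of each token as a potential head vs. dependent.
        self.head_mlp = nn.Sequential(nn.Linear(hidden_dim, arc_dim), nn.ReLU())
        self.dep_mlp = nn.Sequential(nn.Linear(hidden_dim, arc_dim), nn.ReLU())
        # Biaffine weight; the extra row absorbs a bias term via the
        # constant-1 feature appended to the dependent representation.
        self.U = nn.Parameter(torch.empty(arc_dim + 1, arc_dim))
        nn.init.xavier_uniform_(self.U)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        """x: (batch, seq_len, hidden_dim) contextual encodings.
        Returns scores of shape (batch, seq_len, seq_len), where
        scores[b, i, j] rates head candidate j for dependent i.
        """
        head = self.head_mlp(x)                       # (B, N, A)
        dep = self.dep_mlp(x)                         # (B, N, A)
        ones = dep.new_ones(dep.shape[0], dep.shape[1], 1)
        dep = torch.cat([dep, ones], dim=-1)          # (B, N, A+1)
        scores = dep @ self.U @ head.transpose(1, 2)  # (B, N, N)
        # The diagonal is deliberately NOT masked: a node may choose
        # itself as head, i.e. a self-loop in the dependency graph.
        return scores


# Usage sketch: encodings would come from a pretrained encoder.
if __name__ == "__main__":
    scorer = BiaffineArcScorer(hidden_dim=768)
    enc = torch.randn(2, 10, 768)      # (batch=2, seq_len=10, hidden=768)
    arc_scores = scorer(enc)           # (2, 10, 10); diagonal = self-loops
    heads = arc_scores.argmax(dim=-1)  # greedy head choice per token
```

In a syntactic parser the diagonal is typically masked to negative infinity so a token cannot head itself; keeping it live is what allows a self-loop to be predicted as a regular arc.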
Anthology ID: 2025.findings-acl.1093
Volume: Findings of the Association for Computational Linguistics: ACL 2025
Month: July
Year: 2025
Address: Vienna, Austria
Editors: Wanxiang Che, Joyce Nabende, Ekaterina Shutova, Mohammad Taher Pilehvar
Venue: Findings
Publisher: Association for Computational Linguistics
Pages: 21226–21238
URL: https://preview.aclanthology.org/landing_page/2025.findings-acl.1093/
Cite (ACL): Jayeol Chun and Nianwen Xue. 2025. Modal Dependency Parsing via Biaffine Attention with Self-Loop. In Findings of the Association for Computational Linguistics: ACL 2025, pages 21226–21238, Vienna, Austria. Association for Computational Linguistics.
Cite (Informal): Modal Dependency Parsing via Biaffine Attention with Self-Loop (Chun & Xue, Findings 2025)
PDF: https://preview.aclanthology.org/landing_page/2025.findings-acl.1093.pdf