Abstract
The task of modal dependency parsing is to parse a text into its modal dependency structure, a representation of the factuality of the events in the text. We design a modal dependency parser based on priming pre-trained language models and evaluate it on two datasets. Compared to baselines, we show an improvement of 2.6% in F-score for English and 4.6% for Chinese. To the best of our knowledge, this is also the first work on Chinese modal dependency parsing.
- Anthology ID: 2022.naacl-main.211
- Volume: Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
- Month: July
- Year: 2022
- Address: Seattle, United States
- Editors: Marine Carpuat, Marie-Catherine de Marneffe, Ivan Vladimir Meza Ruiz
- Venue: NAACL
- Publisher: Association for Computational Linguistics
- Pages: 2913–2919
- URL: https://aclanthology.org/2022.naacl-main.211
- DOI: 10.18653/v1/2022.naacl-main.211
- Cite (ACL): Jiarui Yao, Nianwen Xue, and Bonan Min. 2022. Modal Dependency Parsing via Language Model Priming. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 2913–2919, Seattle, United States. Association for Computational Linguistics.
- Cite (Informal): Modal Dependency Parsing via Language Model Priming (Yao et al., NAACL 2022)
- PDF: https://preview.aclanthology.org/improve-issue-templates/2022.naacl-main.211.pdf
- Code: jryao/mdp_prompt
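
As a loose illustration of the "priming" idea mentioned in the abstract — not the authors' actual implementation — priming an encoder typically means inserting marker tokens around a span of interest (here, an event trigger) so that a pre-trained language model conditions on that span when predicting a label. The marker token, function name, and label set below are all hypothetical:

```python
# Hypothetical sketch of "priming" an input for modal dependency parsing.
# The marker tokens and modal labels are illustrative, not from the paper.

MODAL_LABELS = ["pos", "neg", "pn"]  # e.g., full / negative / partial factuality

def prime_input(tokens, trigger_idx, marker="<event>"):
    """Wrap the event trigger in marker tokens so a pre-trained encoder
    is 'primed' to attend to that span during classification."""
    primed = list(tokens)
    primed.insert(trigger_idx + 1, marker.replace("<", "</"))  # closing marker
    primed.insert(trigger_idx, marker)                         # opening marker
    return primed

tokens = ["She", "might", "attend", "the", "meeting"]
print(" ".join(prime_input(tokens, 2)))
# → She might <event> attend </event> the meeting
```

The primed sequence would then be fed to an encoder (e.g., a BERT-style model) whose output is classified over the modal label set; that downstream step is omitted here since the paper and repository define the real architecture.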