A Simple and Strong Baseline for End-to-End Neural RST-style Discourse Parsing
Naoki Kobayashi, Tsutomu Hirao, Hidetaka Kamigaito, Manabu Okumura, Masaaki Nagata
Abstract
To promote and further develop RST-style discourse parsing models, we need a strong baseline that can be regarded as a reference for reporting reliable experimental results. This paper explores a strong baseline by integrating existing simple parsing strategies, top-down and bottom-up, with various transformer-based pre-trained language models. The experimental results obtained from two benchmark datasets demonstrate that the parsing performance strongly relies on the pre-trained language models rather than on the parsing strategies. In particular, the bottom-up parser achieves large performance gains over the current best parser when employing DeBERTa. Through our analysis of intra- and multi-sentential parsing and nuclearity prediction, we further reveal that language models with a span-masking scheme especially boost parsing performance.
- Anthology ID:
- 2022.findings-emnlp.501
- Volume:
- Findings of the Association for Computational Linguistics: EMNLP 2022
- Month:
- December
- Year:
- 2022
- Address:
- Abu Dhabi, United Arab Emirates
- Editors:
- Yoav Goldberg, Zornitsa Kozareva, Yue Zhang
- Venue:
- Findings
- Publisher:
- Association for Computational Linguistics
- Pages:
- 6725–6737
- URL:
- https://aclanthology.org/2022.findings-emnlp.501
- DOI:
- 10.18653/v1/2022.findings-emnlp.501
- Cite (ACL):
- Naoki Kobayashi, Tsutomu Hirao, Hidetaka Kamigaito, Manabu Okumura, and Masaaki Nagata. 2022. A Simple and Strong Baseline for End-to-End Neural RST-style Discourse Parsing. In Findings of the Association for Computational Linguistics: EMNLP 2022, pages 6725–6737, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics.
- Cite (Informal):
- A Simple and Strong Baseline for End-to-End Neural RST-style Discourse Parsing (Kobayashi et al., Findings 2022)
- PDF:
- https://aclanthology.org/2022.findings-emnlp.501.pdf
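
The paper's own implementation is not reproduced here. As an illustration only, below is a minimal, hypothetical sketch of the bottom-up idea the abstract describes: encode each elementary discourse unit (EDU) with a pre-trained DeBERTa encoder (via Hugging Face `transformers`), then build an unlabeled binary tree by greedily merging the highest-scoring adjacent pair of spans. The model name, mean pooling, and the untrained bilinear scorer are assumptions for demonstration, not the authors' method.

```python
# Illustrative sketch only (not the authors' code): bottom-up, greedy
# adjacent-pair merging over DeBERTa span representations. The scorer is
# untrained (random weights), so the resulting tree is arbitrary; in a real
# parser it would be learned from RST treebank supervision.
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("microsoft/deberta-base")
encoder = AutoModel.from_pretrained("microsoft/deberta-base")

def encode_edus(edus):
    """Return one vector per EDU: mean-pooled last hidden states."""
    batch = tokenizer(edus, return_tensors="pt", padding=True, truncation=True)
    with torch.no_grad():
        hidden = encoder(**batch).last_hidden_state      # (n_edus, seq, dim)
    mask = batch["attention_mask"].unsqueeze(-1)          # (n_edus, seq, 1)
    return (hidden * mask).sum(1) / mask.sum(1)           # (n_edus, dim)

# Stand-in for a learned merge classifier.
dim = encoder.config.hidden_size
scorer = torch.nn.Bilinear(dim, dim, 1)

def parse_bottom_up(edus):
    """Greedily merge the best-scoring adjacent pair until one tree remains."""
    vecs = list(encode_edus(edus))
    trees = list(range(len(edus)))                        # leaves = EDU indices
    while len(trees) > 1:
        with torch.no_grad():
            scores = [
                scorer(vecs[i].unsqueeze(0), vecs[i + 1].unsqueeze(0)).item()
                for i in range(len(trees) - 1)
            ]
        k = max(range(len(scores)), key=scores.__getitem__)
        trees[k:k + 2] = [(trees[k], trees[k + 1])]       # merge into a subtree
        vecs[k:k + 2] = [(vecs[k] + vecs[k + 1]) / 2]     # crude span vector
    return trees[0]

print(parse_bottom_up(
    ["The weather was bad,", "so we stayed home,", "which was fine."]
))
```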