Can we obtain significant success in RST discourse parsing by using Large Language Models?

Aru Maekawa, Tsutomu Hirao, Hidetaka Kamigaito, Manabu Okumura


Abstract
Recently, decoder-only pre-trained large language models (LLMs), with several tens of billions of parameters, have significantly impacted a wide range of natural language processing (NLP) tasks. While encoder-only or encoder-decoder pre-trained language models have already proved effective in discourse parsing, the extent to which LLMs can perform this task remains an open research question. Therefore, this paper explores how beneficial such LLMs are for Rhetorical Structure Theory (RST) discourse parsing. Here, the parsing process for both the fundamental top-down and bottom-up strategies is converted into prompts that LLMs can work with. We employ Llama 2 and fine-tune it with QLoRA, which reduces the number of trainable parameters. Experimental results on three benchmark datasets, RST-DT, Instr-DT, and the GUM corpus, demonstrate that Llama 2 with 70 billion parameters in the bottom-up strategy obtained state-of-the-art (SOTA) results by significant margins. Furthermore, our parsers demonstrated generalizability when evaluated on RST-DT: despite being trained on the GUM corpus, they obtained performance comparable to that of existing parsers trained on RST-DT.
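The paper's approach (casting top-down and bottom-up parsing steps as prompts and fine-tuning Llama 2 with QLoRA) is not detailed on this page, but the following is a minimal sketch of what 4-bit QLoRA fine-tuning on prompt-formatted parsing actions could look like with the Hugging Face transformers, peft, and datasets libraries. The model name, prompt format, dataset file, and hyperparameters are illustrative assumptions, not the authors' exact configuration.

```python
# Hypothetical sketch: QLoRA fine-tuning of Llama 2 on prompt-formatted
# RST parsing examples. Prompt text, dataset path, and hyperparameters are
# assumptions for illustration, not the authors' exact setup.
import torch
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

model_name = "meta-llama/Llama-2-70b-hf"  # assumed base model

# 4-bit NF4 quantization, as used by QLoRA
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(
    model_name, quantization_config=bnb_config, device_map="auto"
)
model = prepare_model_for_kbit_training(model)

# Low-rank adapters on the attention projections (illustrative choice)
lora_config = LoraConfig(
    r=16, lora_alpha=32, lora_dropout=0.05, bias="none",
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)

# Each example's "text" field is assumed to pair a prompt describing the
# current parser state (e.g., the spans to be merged in a bottom-up step)
# with the gold action, nuclearity, and relation label as the completion.
dataset = load_dataset("json", data_files="rst_parsing_prompts.jsonl")["train"]
tokenized = dataset.map(
    lambda ex: tokenizer(ex["text"], truncation=True, max_length=2048),
    remove_columns=dataset.column_names,
)

trainer = Trainer(
    model=model,
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
    args=TrainingArguments(
        output_dir="llama2-rst-qlora",
        per_device_train_batch_size=1,
        gradient_accumulation_steps=16,
        learning_rate=2e-4,
        num_train_epochs=3,
        bf16=True,
        logging_steps=50,
    ),
)
trainer.train()
```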
Anthology ID:
2024.eacl-long.171
Volume:
Proceedings of the 18th Conference of the European Chapter of the Association for Computational Linguistics (Volume 1: Long Papers)
Month:
March
Year:
2024
Address:
St. Julian’s, Malta
Editors:
Yvette Graham, Matthew Purver
Venue:
EACL
Publisher:
Association for Computational Linguistics
Pages:
2803–2815
URL:
https://aclanthology.org/2024.eacl-long.171
Cite (ACL):
Aru Maekawa, Tsutomu Hirao, Hidetaka Kamigaito, and Manabu Okumura. 2024. Can we obtain significant success in RST discourse parsing by using Large Language Models?. In Proceedings of the 18th Conference of the European Chapter of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2803–2815, St. Julian’s, Malta. Association for Computational Linguistics.
Cite (Informal):
Can we obtain significant success in RST discourse parsing by using Large Language Models? (Maekawa et al., EACL 2024)
PDF:
https://preview.aclanthology.org/nschneid-patch-3/2024.eacl-long.171.pdf
Video:
https://preview.aclanthology.org/nschneid-patch-3/2024.eacl-long.171.mp4