Llamipa: An Incremental Discourse Parser

Kate Thompson, Akshay Chaturvedi, Julie Hunter, Nicholas Asher

Abstract
This paper provides the first discourse parsing experiments with a large language model (LLM) finetuned on corpora annotated in the style of SDRT (Segmented Discourse Representation Theory; Asher, 1993; Asher and Lascarides, 2003). The result is a discourse parser, Llamipa (Llama Incremental Parser), that leverages discourse context, leading to substantial performance gains over approaches that use encoder-only models to provide local, context-sensitive representations of discourse units. Furthermore, it is able to process discourse data incrementally, which is essential for the eventual use of discourse information in downstream tasks.
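The abstract's central claim is that the parser builds discourse structure incrementally, feeding its own earlier predictions back in as context when attaching each new discourse unit. The following is a minimal sketch of that kind of loop, assuming a hypothetical predict_links function as a stand-in for the finetuned model; the names, data structures, and the attach-to-previous-unit placeholder are illustrative only and do not reflect the paper's actual interface or prompt format.

```python
from dataclasses import dataclass, field


@dataclass
class DiscourseGraph:
    units: list[str] = field(default_factory=list)                    # elementary discourse units (EDUs)
    edges: list[tuple[int, int, str]] = field(default_factory=list)   # (source, target, relation)


def predict_links(context_units: list[str],
                  context_edges: list[tuple[int, int, str]],
                  new_unit: str) -> list[tuple[int, str]]:
    """Hypothetical stand-in for the finetuned LLM: given prior units and the
    structure predicted so far, return (source_index, relation) attachments
    for the new unit. Here we simply attach to the most recent unit with a
    placeholder relation label."""
    return [(len(context_units) - 1, "Comment")]


def parse_incrementally(edus: list[str]) -> DiscourseGraph:
    """Incremental parsing loop: each unit is attached using the units seen
    so far and the structure already predicted for them."""
    graph = DiscourseGraph()
    for i, edu in enumerate(edus):
        if i > 0:  # the first unit has nothing to attach to
            for source, relation in predict_links(graph.units, graph.edges, edu):
                graph.edges.append((source, i, relation))
        graph.units.append(edu)  # predicted structure becomes context for the next unit
    return graph


if __name__ == "__main__":
    graph = parse_incrementally(["Anyone want wheat?", "I'll give sheep for it.", "Deal."])
    print(graph.edges)  # [(0, 1, 'Comment'), (1, 2, 'Comment')]
```

The point of the sketch is the feedback loop: because the growing graph is passed back into each prediction step, the model can condition on global discourse context rather than on isolated unit pairs, which is what distinguishes this setup from approaches built on local, encoder-only representations.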
Anthology ID: 2024.findings-emnlp.373
Volume: Findings of the Association for Computational Linguistics: EMNLP 2024
Month: November
Year: 2024
Address: Miami, Florida, USA
Editors: Yaser Al-Onaizan, Mohit Bansal, Yun-Nung Chen
Venue: Findings
Publisher: Association for Computational Linguistics
Pages: 6418–6430
URL: https://preview.aclanthology.org/fix-sig-urls/2024.findings-emnlp.373/
DOI: 10.18653/v1/2024.findings-emnlp.373
Cite (ACL): Kate Thompson, Akshay Chaturvedi, Julie Hunter, and Nicholas Asher. 2024. Llamipa: An Incremental Discourse Parser. In Findings of the Association for Computational Linguistics: EMNLP 2024, pages 6418–6430, Miami, Florida, USA. Association for Computational Linguistics.
Cite (Informal): Llamipa: An Incremental Discourse Parser (Thompson et al., Findings 2024)
PDF: https://preview.aclanthology.org/fix-sig-urls/2024.findings-emnlp.373.pdf