Discourse Relation Recognition with Language Models Under Different Data Availability

Shuhaib Mehri, Chuyuan Li, Giuseppe Carenini


Abstract
Large Language Models (LLMs) have demonstrated remarkable performance across various NLP tasks, yet they continue to face challenges in discourse relation recognition (DRR). Current state-of-the-art methods for DRR primarily rely on smaller pre-trained language models (PLMs). In this study, we conduct a comprehensive analysis of different approaches using both PLMs and LLMs, evaluating their effectiveness for DRR at multiple granularities and under different data availability settings. Our findings indicate that no single approach consistently outperforms the others, and we offer a general comparison framework to guide the selection of the most appropriate model based on specific DRR requirements and data conditions.
Anthology ID:
2025.codi-1.13
Volume:
Proceedings of the 6th Workshop on Computational Approaches to Discourse, Context and Document-Level Inferences (CODI 2025)
Month:
November
Year:
2025
Address:
Suzhou, China
Editors:
Michael Strube, Chloé Braud, Christian Hardmeier, Junyi Jessy Li, Sharid Loáiciga, Amir Zeldes, Chuyuan Li
Venues:
CODI | WS
Publisher:
Association for Computational Linguistics
Pages:
148–156
URL:
https://preview.aclanthology.org/ingest-emnlp/2025.codi-1.13/
Cite (ACL):
Shuhaib Mehri, Chuyuan Li, and Giuseppe Carenini. 2025. Discourse Relation Recognition with Language Models Under Different Data Availability. In Proceedings of the 6th Workshop on Computational Approaches to Discourse, Context and Document-Level Inferences (CODI 2025), pages 148–156, Suzhou, China. Association for Computational Linguistics.
Cite (Informal):
Discourse Relation Recognition with Language Models Under Different Data Availability (Mehri et al., CODI 2025)
PDF:
https://preview.aclanthology.org/ingest-emnlp/2025.codi-1.13.pdf