On the Role of Context for Discourse Relation Classification in Scientific Writing

Stephen Wan, Wei Liu, Michael Strube


Abstract
With the increasing use of generative Artificial Intelligence (AI) methods to support science workflows, we are interested in using discourse-level information to find supporting evidence for AI-generated scientific claims. A first step towards this objective is to examine the task of inferring discourse structure in scientific writing. In this work, we present a preliminary investigation of pretrained language model (PLM) and Large Language Model (LLM) approaches for Discourse Relation Classification (DRC), focusing on scientific publications, an under-studied genre for this task. We examine how context can help with the DRC task, and our experiments show that context, as defined by discourse structure, is generally helpful. We also present an analysis of which scientific discourse relation types might benefit most from context.
Anthology ID:
2025.codi-1.8
Volume:
Proceedings of the 6th Workshop on Computational Approaches to Discourse, Context and Document-Level Inferences (CODI 2025)
Month:
November
Year:
2025
Address:
Suzhou, China
Editors:
Michael Strube, Chloé Braud, Christian Hardmeier, Junyi Jessy Li, Sharid Loáiciga, Amir Zeldes, Chuyuan Li
Venues:
CODI | WS
Publisher:
Association for Computational Linguistics
Pages:
96–106
URL:
https://preview.aclanthology.org/ingest-emnlp/2025.codi-1.8/
Cite (ACL):
Stephen Wan, Wei Liu, and Michael Strube. 2025. On the Role of Context for Discourse Relation Classification in Scientific Writing. In Proceedings of the 6th Workshop on Computational Approaches to Discourse, Context and Document-Level Inferences (CODI 2025), pages 96–106, Suzhou, China. Association for Computational Linguistics.
Cite (Informal):
On the Role of Context for Discourse Relation Classification in Scientific Writing (Wan et al., CODI 2025)
PDF:
https://preview.aclanthology.org/ingest-emnlp/2025.codi-1.8.pdf