Prompting Implicit Discourse Relation Annotation

Frances Yung, Mansoor Ahmad, Merel Scholman, Vera Demberg


Abstract
Pre-trained large language models, such as ChatGPT, achieve outstanding performance in various reasoning tasks without supervised training and have been found to outperform crowdsourced annotators. Nonetheless, ChatGPT’s performance on the task of implicit discourse relation classification, when prompted with a standard multiple-choice question, is still far from satisfactory and considerably inferior to state-of-the-art supervised approaches. This work investigates several proven prompting techniques to improve ChatGPT’s recognition of discourse relations. In particular, we experimented with breaking down the classification task, which involves numerous abstract labels, into smaller subtasks. However, the experimental results show that inference accuracy hardly changes even with sophisticated prompt engineering, suggesting that implicit discourse relation classification is not yet resolvable under zero-shot or few-shot settings.
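
The abstract refers to prompting the model with a standard multiple-choice question. The snippet below is a minimal sketch of what such a zero-shot prompt could look like, assuming the four top-level PDTB sense classes as answer options; the actual label set, prompt wording, and model interface used in the paper are not specified here, and `build_prompt`, `classify`, and `query_llm` are hypothetical names introduced only for illustration.

```python
# Minimal sketch of a zero-shot multiple-choice prompt for implicit discourse
# relation classification. The exact prompt and labels in the paper may differ;
# `query_llm` is a hypothetical placeholder for a call to an LLM API.

# Four top-level PDTB sense classes, used here for illustration only.
LABELS = ["Temporal", "Contingency", "Comparison", "Expansion"]

def build_prompt(arg1: str, arg2: str) -> str:
    """Format the two discourse arguments as a multiple-choice question."""
    options = "\n".join(f"{chr(65 + i)}. {label}" for i, label in enumerate(LABELS))
    return (
        "What is the sense of the implicit discourse relation between the two "
        "text spans below?\n\n"
        f"Span 1: {arg1}\n"
        f"Span 2: {arg2}\n\n"
        f"Options:\n{options}\n\n"
        "Answer with a single letter."
    )

def classify(arg1: str, arg2: str, query_llm) -> str:
    """Send the prompt to an LLM and map its answer back to a sense label."""
    answer = query_llm(build_prompt(arg1, arg2)).strip()
    for i, label in enumerate(LABELS):
        if answer.upper().startswith(chr(65 + i)) or label.lower() in answer.lower():
            return label
    return "UNPARSABLE"  # responses that do not match any option

if __name__ == "__main__":
    # Dummy backend so the sketch runs without an API key.
    demo = classify(
        "The company posted record profits this quarter.",
        "Its stock price barely moved.",
        query_llm=lambda prompt: "C. Comparison",
    )
    print(demo)  # -> Comparison
```

The task decomposition mentioned in the abstract would replace this single question with a sequence of smaller prompts, each narrowing down the relation sense step by step.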
Anthology ID:
2024.law-1.15
Volume:
Proceedings of The 18th Linguistic Annotation Workshop (LAW-XVIII)
Month:
March
Year:
2024
Address:
St. Julians, Malta
Editors:
Sophie Henning, Manfred Stede
Venues:
LAW | WS
Publisher:
Association for Computational Linguistics
Pages:
150–165
URL:
https://aclanthology.org/2024.law-1.15
Cite (ACL):
Frances Yung, Mansoor Ahmad, Merel Scholman, and Vera Demberg. 2024. Prompting Implicit Discourse Relation Annotation. In Proceedings of The 18th Linguistic Annotation Workshop (LAW-XVIII), pages 150–165, St. Julians, Malta. Association for Computational Linguistics.
Cite (Informal):
Prompting Implicit Discourse Relation Annotation (Yung et al., LAW-WS 2024)
PDF:
https://preview.aclanthology.org/improve-issue-templates/2024.law-1.15.pdf
Video:
https://preview.aclanthology.org/improve-issue-templates/2024.law-1.15.mp4