Proceedings of the 3rd Shared Task on Discourse Relation Parsing and Treebanking (DISRPT 2023)

Chloé Braud, Yang Janet Liu, Eleni Metheniti, Philippe Muller, Laura Rivière, Attapol Rutherford, Amir Zeldes (Editors)


Anthology ID: 2023.disrpt-1
Month: July
Year: 2023
Address: Toronto, Canada
Venue: DISRPT
Publisher: The Association for Computational Linguistics
URL: https://aclanthology.org/2023.disrpt-1
PDF: https://preview.aclanthology.org/emnlp-22-attachments/2023.disrpt-1.pdf

Proceedings of the 3rd Shared Task on Discourse Relation Parsing and Treebanking (DISRPT 2023)
Chloé Braud | Yang Janet Liu | Eleni Metheniti | Philippe Muller | Laura Rivière | Attapol Rutherford | Amir Zeldes

The DISRPT 2023 Shared Task on Elementary Discourse Unit Segmentation, Connective Detection, and Relation Classification
Chloé Braud | Yang Janet Liu | Eleni Metheniti | Philippe Muller | Laura Rivière | Attapol Rutherford | Amir Zeldes

In 2023, the third iteration of the DISRPT Shared Task (Discourse Relation Parsing and Treebanking) was held, dedicated to the underlying units used in discourse parsing across formalisms. Following the success of the 2019 and 2021 tasks on Elementary Discourse Unit Segmentation, Connective Detection, and Relation Classification, this iteration has added 10 new corpora, including 2 new languages (Thai and Italian) and 3 discourse treebanks annotated in the discourse dependency representation, in addition to the previously included frameworks: RST, SDRT, and PDTB. In this paper, we review the data included in the Shared Task, which covers 26 datasets across 13 languages, survey and compare submitted systems, and report on system performance on each task for both annotated and plain-tokenized versions of the data.

DiscoFlan: Instruction Fine-tuning and Refined Text Generation for Discourse Relation Label Classification
Kaveri Anuranjana

This paper introduces DiscoFlan, a multilingual discourse relation classifier submitted to DISRPT 2023. Our submission represents the first attempt at building a multilingual discourse relation classifier for the DISRPT 2023 shared task. Our model addresses the problem of label mismatches caused by hallucination in a seq2seq model by utilizing label distribution information during label generation. In contrast to the previous state-of-the-art model, our approach eliminates the need for hand-crafted features when computing the discourse relation classes. Furthermore, we propose a novel label generation mechanism that anchors the labels to a fixed set by selectively enhancing training of the decoder. Our experimental results demonstrate that our model surpasses the current state-of-the-art performance on 11 of the 26 datasets considered; the submitted model compatible with the provided evaluation scripts does so on 7 of the 26 datasets, while remaining competitive on the rest. Overall, DiscoFlan showcases promising advancements in multilingual discourse relation classification for the DISRPT 2023 shared task.
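
The abstract does not give implementation details, so the following is only a minimal sketch of the general idea of anchoring a seq2seq classifier's output to a fixed label set, not DiscoFlan's actual mechanism. It constrains decoding so the model can only emit strings from a closed label inventory; the Flan-T5 checkpoint and the four-label inventory are illustrative assumptions.

# Sketch: constrained decoding over a closed label set (illustrative only).
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

MODEL = "google/flan-t5-base"  # assumed checkpoint, not the DiscoFlan model
LABELS = ["cause", "contrast", "elaboration", "condition"]  # hypothetical set

tok = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForSeq2SeqLM.from_pretrained(MODEL)

# Pre-tokenize every label once (with a final EOS) so decoding can be
# restricted to token sequences that spell out a valid label.
label_ids = [tok(l, add_special_tokens=False).input_ids + [tok.eos_token_id]
             for l in LABELS]

def allowed_tokens(batch_id, generated):
    # Allow only tokens that continue some label, given what was decoded so far.
    prefix = generated.tolist()[1:]  # drop the decoder start token
    nxt = {ids[len(prefix)] for ids in label_ids
           if len(ids) > len(prefix) and ids[:len(prefix)] == prefix}
    return list(nxt) or [tok.eos_token_id]

prompt = "Relation between: [The road was icy] and [the car skidded] ?"
inputs = tok(prompt, return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=8,
                     prefix_allowed_tokens_fn=allowed_tokens)
print(tok.decode(out[0], skip_special_tokens=True))  # always one of LABELS

Constrained decoding of this kind guarantees that the generated string is always a valid label, which sidesteps the hallucination-induced mismatch problem the abstract describes.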

DisCut and DiscReT: MELODI at DISRPT 2023
Eleni Metheniti | Chloé Braud | Philippe Muller | Laura Rivière

This paper presents the results obtained by the MELODI team for the three tasks proposed within the DISRPT 2023 shared task on discourse: segmentation, connective identification, and relation classification. The competition involves corpora in various languages and several underlying frameworks, and offers two tracks depending on whether annotations of sentence boundaries and syntactic information are available. For these three tasks, we rely on a transformer-based architecture and investigate several optimizations of the models, including hyper-parameter search and layer freezing. For discourse relations, we also explore the use of adapters, a lightweight solution for model fine-tuning, and introduce relation mappings to partially address the label-set explosion that arises when the shared task is approached from a multi-corpus perspective. In the end, we propose a single architecture for segmentation and connective detection, based on XLM-RoBERTa large with its lower layers frozen, which achieves new state-of-the-art results for segmentation, and three different models for relations, since that task makes it harder to generalize across all corpora.
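
As an illustration of the layer-freezing idea mentioned above, here is a minimal sketch; the number of frozen layers and the label set are assumptions, not the MELODI configuration. XLM-RoBERTa large is loaded for token-level segmentation, and the embeddings plus the lower encoder layers are excluded from fine-tuning.

# Sketch: freeze the embeddings and lower encoder layers of XLM-RoBERTa large.
from transformers import AutoModelForTokenClassification

model = AutoModelForTokenClassification.from_pretrained(
    "xlm-roberta-large", num_labels=2)  # e.g. segment-initial vs. inside

N_FROZEN = 12  # hypothetical: freeze the lower half of the 24 encoder layers

for param in model.roberta.embeddings.parameters():
    param.requires_grad = False
for layer in model.roberta.encoder.layer[:N_FROZEN]:
    for param in layer.parameters():
        param.requires_grad = False

trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
print(f"trainable parameters: {trainable:,}")

Freezing the lower layers keeps the language-general representations intact while only the upper layers and the classification head are tuned on the shared-task corpora.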

HITS at DISRPT 2023: Discourse Segmentation, Connective Detection, and Relation Classification
Wei Liu | Yi Fan | Michael Strube

HITS participated in the Discourse Segmentation (DS, Task 1) and Connective Detection (CD, Task 2) tasks at DISRPT 2023. Task 1 focuses on segmenting the text into discourse units, while Task 2 aims to detect discourse connectives. For these two tasks, we deployed a framework based on different pre-trained models chosen according to the target language. HITS also participated in the Relation Classification track (Task 3), whose main goal is recognizing the discourse relation between text spans in different languages. We designed a joint model for languages with small corpora and separate models for large corpora. An adversarial training strategy is applied to enhance the robustness of the relation classifiers.
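
The abstract does not specify which adversarial training strategy was used, so the sketch below shows one common embedding-level variant (FGM-style perturbation) as an illustration only; the function and hyper-parameters are hypothetical.

# Sketch: FGM-style adversarial training step for a relation classifier.
import torch

def fgm_training_step(model, batch, optimizer, epsilon=1.0):
    """One training step with an adversarial perturbation of the word embeddings."""
    # 1) ordinary forward/backward pass on the clean batch
    loss = model(**batch).loss
    loss.backward()

    # 2) perturb the embedding matrix along its gradient direction
    emb = model.get_input_embeddings().weight
    grad = emb.grad.detach()
    norm = torch.norm(grad)
    if norm > 0 and not torch.isnan(norm):
        delta = epsilon * grad / norm
        emb.data.add_(delta)

        # 3) forward/backward on the perturbed embeddings, then restore them
        adv_loss = model(**batch).loss
        adv_loss.backward()
        emb.data.sub_(delta)

    # 4) update with the accumulated (clean + adversarial) gradients
    optimizer.step()
    optimizer.zero_grad()
    return loss.item()

The clean and adversarial gradients accumulate before the optimizer step, so the classifier is pushed to stay stable under small perturbations of its input embeddings, which is the robustness effect the abstract refers to.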