Using Explicit Discourse Connectives in Translation for Implicit Discourse Relation Classification

Wei Shi, Frances Yung, Raphael Rubino, Vera Demberg


Abstract
Implicit discourse relation recognition is an extremely challenging task due to the lack of indicative connectives. Various neural network architectures have been proposed for this task recently, but most of them suffer from the shortage of labeled data. In this paper, we address this problem by procuring additional training data from parallel corpora: When humans translate a text, they sometimes add connectives (a process known as explicitation). We automatically back-translate these added connectives into English connectives and use them to infer relation labels with high confidence. We show that a training set several times larger than the original one can be generated this way. With the extra labeled instances, even a simple bidirectional Long Short-Term Memory network can outperform the current state of the art.
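
As a rough illustration of the label-inference step described in the abstract, the sketch below shows how an English connective recovered via back-translation could be mapped to a coarse discourse relation label to produce an additional "silver" training instance. The connective-to-sense mapping, the function name infer_silver_label, and the example are illustrative assumptions for this sketch, not the authors' actual resources or mapping.

from typing import Optional

# Hypothetical, simplified mapping from relatively unambiguous English
# connectives to PDTB-style top-level senses. This small subset is purely
# illustrative; it is not the mapping used in the paper.
CONNECTIVE_TO_SENSE = {
    "because": "Contingency",
    "so": "Contingency",
    "but": "Comparison",
    "however": "Comparison",
    "then": "Temporal",
    "meanwhile": "Temporal",
    "for example": "Expansion",
    "in addition": "Expansion",
}

def infer_silver_label(back_translated_connective: str) -> Optional[str]:
    """Return a coarse relation label for a connective recovered via
    back-translation, or None if the connective is unknown or too
    ambiguous (such instances would simply be discarded)."""
    return CONNECTIVE_TO_SENSE.get(back_translated_connective.lower().strip())

if __name__ == "__main__":
    # Example: a translator made an implicit relation explicit with
    # "cependant" (French); back-translation yields "however",
    # which maps to the Comparison sense.
    print(infer_silver_label("however"))  # Comparison

Instances labeled this way could then be added to the gold training data before training a classifier such as the bidirectional LSTM mentioned above.
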
Anthology ID: I17-1049
Volume: Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 1: Long Papers)
Month: November
Year: 2017
Address: Taipei, Taiwan
Venue: IJCNLP
Publisher: Asian Federation of Natural Language Processing
Pages: 484–495
URL: https://aclanthology.org/I17-1049
Cite (ACL): Wei Shi, Frances Yung, Raphael Rubino, and Vera Demberg. 2017. Using Explicit Discourse Connectives in Translation for Implicit Discourse Relation Classification. In Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 484–495, Taipei, Taiwan. Asian Federation of Natural Language Processing.
Cite (Informal): Using Explicit Discourse Connectives in Translation for Implicit Discourse Relation Classification (Shi et al., IJCNLP 2017)
PDF: https://preview.aclanthology.org/auto-file-uploads/I17-1049.pdf