Abstract
The research described in this paper examines how to learn linguistic knowledge associated with discourse relations from unlabeled corpora. We introduce an unsupervised method for learning text coherence that produces numerical representations which improve implicit discourse relation recognition in a semi-supervised manner. We also empirically compare two variants of coherence modeling, order-oriented and topic-oriented negative sampling, and show that topic-oriented negative sampling tends to be more effective.
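The two negative-sampling variants named in the abstract can be illustrated with a short sketch: order-oriented negatives break the sentence ordering of a coherent document, while topic-oriented negatives break its topical continuity. The Python below is a minimal hypothetical illustration of that distinction, not the authors' implementation; the function names `order_negative` and `topic_negative` and the toy corpus are assumptions.

```python
import random

def order_negative(doc):
    """Order-oriented negative sample: permute the sentences of a
    coherent document so that ordering cues are broken.
    (Illustrative only; not the paper's exact procedure.)"""
    sents = list(doc)
    if len(sents) > 1:
        while sents == list(doc):  # reshuffle until the order differs
            random.shuffle(sents)
    return sents

def topic_negative(doc, corpus):
    """Topic-oriented negative sample: replace one sentence with a
    sentence drawn from a different document, breaking topical
    continuity rather than sentence order. (Illustrative only.)"""
    other = random.choice([d for d in corpus if d is not doc])
    intruder = random.choice(other)
    i = random.randrange(len(doc))
    return doc[:i] + [intruder] + doc[i + 1:]

# Toy usage: a coherence model would be trained to score each original
# document higher than its negatives (e.g., with a margin ranking loss).
corpus = [
    ["The cat sat down.", "It purred softly.", "Then it fell asleep."],
    ["Stocks fell sharply.", "Investors sold off.", "Markets closed early."],
]
positive = corpus[0]
print(order_negative(positive))
print(topic_negative(positive, corpus))
```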
- Anthology ID:
- W18-5040
- Volume:
- Proceedings of the 19th Annual SIGdial Meeting on Discourse and Dialogue
- Month:
- July
- Year:
- 2018
- Address:
- Melbourne, Australia
- Editors:
- Kazunori Komatani, Diane Litman, Kai Yu, Alex Papangelis, Lawrence Cavedon, Mikio Nakano
- Venue:
- SIGDIAL
- SIG:
- SIGDIAL
- Publisher:
- Association for Computational Linguistics
- Pages:
- 344–349
- URL:
- https://aclanthology.org/W18-5040
- DOI:
- 10.18653/v1/W18-5040
- Cite (ACL):
- Noriki Nishida and Hideki Nakayama. 2018. Coherence Modeling Improves Implicit Discourse Relation Recognition. In Proceedings of the 19th Annual SIGdial Meeting on Discourse and Dialogue, pages 344–349, Melbourne, Australia. Association for Computational Linguistics.
- Cite (Informal):
- Coherence Modeling Improves Implicit Discourse Relation Recognition (Nishida & Nakayama, SIGDIAL 2018)
- PDF:
- https://aclanthology.org/W18-5040.pdf