Chinese Discourse Parsing: Model and Evaluation
Lin Chuan-An, Shyh-Shiun Hung, Hen-Hsen Huang, Hsin-Hsi Chen
Abstract
Chinese discourse parsing, which aims to identify the hierarchical relationships among Chinese elementary discourse units, does not yet have a consistent evaluation metric. Although Parseval is commonly used, evaluation setups differ in three aspects: micro vs. macro F1 scores, binary vs. multiway ground truth, and left-heavy vs. right-heavy binarization. In this paper, we first propose a neural network model that unifies a pre-trained transformer and a CKY-like algorithm, and then compare it with previous models under different evaluation scenarios. The experimental results show that our model outperforms the previous systems. We conclude that (1) pre-trained contextual embeddings provide an effective way to deal with implicit semantics in Chinese texts, and (2) using multiway ground truth is helpful, since different binarization approaches lead to significant differences in performance.
- Anthology ID:
- 2020.lrec-1.128
- Volume:
- Proceedings of the Twelfth Language Resources and Evaluation Conference
- Month:
- May
- Year:
- 2020
- Address:
- Marseille, France
- Editors:
- Nicoletta Calzolari, Frédéric Béchet, Philippe Blache, Khalid Choukri, Christopher Cieri, Thierry Declerck, Sara Goggi, Hitoshi Isahara, Bente Maegaard, Joseph Mariani, Hélène Mazo, Asuncion Moreno, Jan Odijk, Stelios Piperidis
- Venue:
- LREC
- Publisher:
- European Language Resources Association
- Pages:
- 1019–1024
- Language:
- English
- URL:
- https://aclanthology.org/2020.lrec-1.128
- Cite (ACL):
- Lin Chuan-An, Shyh-Shiun Hung, Hen-Hsen Huang, and Hsin-Hsi Chen. 2020. Chinese Discourse Parsing: Model and Evaluation. In Proceedings of the Twelfth Language Resources and Evaluation Conference, pages 1019–1024, Marseille, France. European Language Resources Association.
- Cite (Informal):
- Chinese Discourse Parsing: Model and Evaluation (Chuan-An et al., LREC 2020)
- PDF:
- https://preview.aclanthology.org/naacl-24-ws-corrections/2020.lrec-1.128.pdf
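The abstract's second conclusion hinges on the fact that a multiway discourse node can be binarized in more than one way, yielding different trees and therefore different Parseval scores. A minimal sketch of this contrast (not the paper's implementation; trees here are illustrative nested tuples over hypothetical EDU labels):

```python
def binarize(tree, heavy="left"):
    """Recursively binarize a nested-tuple discourse tree.

    A leaf is any non-tuple value; an internal node is a tuple of children.
    heavy="left":  group leading children first  -> ((a, b), c)
    heavy="right": group trailing children first -> (a, (b, c))
    """
    if not isinstance(tree, tuple):
        return tree
    children = [binarize(child, heavy) for child in tree]
    if heavy == "left":
        node = children[0]
        for child in children[1:]:
            node = (node, child)       # attach each next child on the right
    else:
        node = children[-1]
        for child in reversed(children[:-1]):
            node = (child, node)       # attach each previous child on the left
    return node

# A flat multiway node spanning three elementary discourse units:
multiway = ("EDU1", "EDU2", "EDU3")
print(binarize(multiway, "left"))   # (('EDU1', 'EDU2'), 'EDU3')
print(binarize(multiway, "right"))  # ('EDU1', ('EDU2', 'EDU3'))
```

The two outputs contain different internal spans (EDU1+EDU2 vs. EDU2+EDU3), so a span-matching metric like Parseval scores a predicted tree differently depending on which binarization of the multiway gold tree it is compared against, which is why the paper argues for evaluating against the multiway ground truth directly.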