RethinkCWS: Is Chinese Word Segmentation a Solved Task?

Jinlan Fu, Pengfei Liu, Qi Zhang, Xuanjing Huang


Abstract
The performance of Chinese Word Segmentation (CWS) systems has gradually reached a plateau with the rapid development of deep neural networks, especially the successful use of large pre-trained models. In this paper, we take stock of what we have achieved and rethink what’s left in the CWS task. Methodologically, we propose a fine-grained evaluation for existing CWS systems, which not only allows us to diagnose the strengths and weaknesses of existing models (under the in-dataset setting), but also enables us to quantify the discrepancy between different criteria and alleviate the negative transfer problem in multi-criteria learning. Strategically, although this paper does not aim to propose a novel model, our comprehensive experiments on eight models and seven datasets, together with thorough analysis, point to promising directions for future research. We make all code publicly available and release an interface that can quickly evaluate and diagnose users’ models: https://github.com/neulab/InterpretEval
Anthology ID:
2020.emnlp-main.457
Volume:
Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)
Month:
November
Year:
2020
Address:
Online
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
5676–5686
URL:
https://aclanthology.org/2020.emnlp-main.457
DOI:
10.18653/v1/2020.emnlp-main.457
Cite (ACL):
Jinlan Fu, Pengfei Liu, Qi Zhang, and Xuanjing Huang. 2020. RethinkCWS: Is Chinese Word Segmentation a Solved Task?. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 5676–5686, Online. Association for Computational Linguistics.
Cite (Informal):
RethinkCWS: Is Chinese Word Segmentation a Solved Task? (Fu et al., EMNLP 2020)
PDF:
https://preview.aclanthology.org/update-css-js/2020.emnlp-main.457.pdf
Video:
https://slideslive.com/38939384
Code:
neulab/InterpretEval