Chenjie Cao
2020
SiBert: Enhanced Chinese Pre-trained Language Model with Sentence Insertion
Jiahao Chen, Chenjie Cao, Xiuyan Jiang
Proceedings of the Twelfth Language Resources and Evaluation Conference
Pre-trained models have achieved great success in learning unsupervised language representations through self-supervised tasks on large-scale corpora. Recent studies mainly focus on how to fine-tune a general pre-trained model for different downstream tasks. However, some studies show that self-supervised tasks customized for a particular type of downstream task can effectively help the pre-trained model capture more of the corresponding knowledge and semantic information. Hence, a new pre-training task called Sentence Insertion (SI) is proposed in this paper for Chinese query-passage pair NLP tasks, including answer span prediction, retrieval question answering, and sentence-level cloze test. The experimental results indicate that the proposed SI task significantly improves the performance of Chinese pre-trained models. Moreover, a word segmentation method called SentencePiece is used to further enhance Chinese BERT's performance on tasks with long texts. The complete source code is available at https://github.com/ewrfcas/SiBert_tensorflow.
CLUE: A Chinese Language Understanding Evaluation Benchmark
Liang Xu, Hai Hu, Xuanwei Zhang, Lu Li, Chenjie Cao, Yudong Li, Yechen Xu, Kai Sun, Dian Yu, Cong Yu, Yin Tian, Qianqian Dong, Weitang Liu, Bo Shi, Yiming Cui, Junyi Li, Jun Zeng, Rongzhao Wang, Weijian Xie, Yanting Li, Yina Patterson, Zuoyu Tian, Yiwen Zhang, He Zhou, Shaoweihua Liu, Zhe Zhao, Qipeng Zhao, Cong Yue, Xinrui Zhang, Zhengliang Yang, Kyle Richardson, Zhenzhong Lan
Proceedings of the 28th International Conference on Computational Linguistics
The advent of natural language understanding (NLU) benchmarks for English, such as GLUE and SuperGLUE, allows new NLU models to be evaluated across a diverse set of tasks. These comprehensive benchmarks have facilitated a broad range of research and applications in natural language processing (NLP). The problem, however, is that most such benchmarks are limited to English, which has made it difficult to replicate many of the successes in English NLU for other languages. To help remedy this issue, we introduce the first large-scale Chinese Language Understanding Evaluation (CLUE) benchmark. CLUE is an open-ended, community-driven project that brings together 9 tasks spanning several well-established single-sentence/sentence-pair classification tasks, as well as machine reading comprehension, all on original Chinese text. To establish results on these tasks, we report scores using an exhaustive set of current state-of-the-art pre-trained Chinese models (9 in total). We also introduce a number of supplementary datasets and additional tools to help facilitate further progress on Chinese NLU. Our benchmark is released at https://www.cluebenchmarks.com.