Dechuan Teng
2018
Chinese Grammatical Error Diagnosis using Statistical and Prior Knowledge driven Features with Probabilistic Ensemble Enhancement
Ruiji Fu | Zhengqi Pei | Jiefu Gong | Wei Song | Dechuan Teng | Wanxiang Che | Shijin Wang | Guoping Hu | Ting Liu
Proceedings of the 5th Workshop on Natural Language Processing Techniques for Educational Applications
This paper describes our system for NLPTEA-2018 Task #1: Chinese Grammatical Error Diagnosis. Grammatical error diagnosis is one of the most challenging NLP tasks: it requires locating grammatical errors and identifying their types. Our system is built on a bidirectional Long Short-Term Memory model with a conditional random field layer (BiLSTM-CRF), extended in three ways. First, richer features are fed into the BiLSTM-CRF model; second, a probabilistic ensemble approach is adopted; third, a Template Matcher is applied during post-processing to bring in human knowledge. In the official evaluation, our system obtains the highest F1 scores for identifying error types and locating error positions, and the second-highest F1 score for sentence-level error detection. We also recommend error corrections for specific error types and achieve the best F1 performance among all participants.
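The probabilistic ensemble step can be read as averaging per-token label distributions from several independently trained taggers before decoding. Below is a minimal Python/NumPy sketch of that idea only; the function names, the greedy decoding, and the tag set (O plus the four CGED error types R/M/S/W) are illustrative assumptions, not the paper's published code:

```python
import numpy as np

# Hypothetical tag set: "O" plus the four CGED error types
# (R = redundant, M = missing, S = selection, W = word order).
# The actual label scheme is an assumption, not taken from the paper.
LABELS = ["O", "R", "M", "S", "W"]

def ensemble_probs(model_probs):
    """Average per-token label distributions from several taggers.

    model_probs: list of (seq_len, n_labels) arrays, one per model,
    each row a probability distribution over LABELS for one token.
    """
    return np.mean(np.stack(model_probs, axis=0), axis=0)

def greedy_decode(avg_probs):
    """Pick the highest-probability label for each token.

    (The paper decodes with a CRF layer; greedy argmax stands in
    here to keep the sketch self-contained.)
    """
    return [LABELS[i] for i in avg_probs.argmax(axis=1)]

# Toy usage: three models scoring a 4-token sentence.
rng = np.random.default_rng(0)
probs = [rng.dirichlet(np.ones(len(LABELS)), size=4) for _ in range(3)]
print(greedy_decode(ensemble_probs(probs)))
```

Averaging distributions rather than hard votes lets a confident minority model outweigh several uncertain ones, which is the usual motivation for a probabilistic rather than majority-vote ensemble.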
2017
The HIT-SCIR System for End-to-End Parsing of Universal Dependencies
Wanxiang Che | Jiang Guo | Yuxuan Wang | Bo Zheng | Huaipeng Zhao | Yang Liu | Dechuan Teng | Ting Liu
Proceedings of the CoNLL 2017 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies
This paper describes our system (HIT-SCIR) for the CoNLL 2017 shared task: Multilingual Parsing from Raw Text to Universal Dependencies. Our system comprises three pipelined components: tokenization, part-of-speech (POS) tagging, and dependency parsing. We use character-based bidirectional long short-term memory (LSTM) networks for both tokenization and POS tagging. We then employ a list-based transition-based algorithm for general non-projective parsing and present an improved Stack-LSTM-based architecture for representing each transition state and making predictions. Furthermore, to parse low- and zero-resource languages and cross-domain data, we use a model-transfer approach to make effective use of existing resources. We demonstrate substantial gains over the UDPipe baseline, with an average LAS improvement of 3.76% across all languages. Finally, we rank 4th on the official test sets.
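Both the tokenization and POS-tagging components cast their task as character-level sequence labeling with a BiLSTM. The following PyTorch sketch shows the general shape of such a tagger; the class name, tag set, and layer sizes are illustrative assumptions, not the authors' released code:

```python
import torch
import torch.nn as nn

class CharBiLSTMTagger(nn.Module):
    """Character-level BiLSTM tagger (illustrative sketch).

    For tokenization, each character receives a boundary tag
    (e.g. B for a word start, I for a continuation); the same
    architecture with a different tag set serves for POS tagging.
    """
    def __init__(self, n_chars, n_tags, emb_dim=64, hidden=128):
        super().__init__()
        self.emb = nn.Embedding(n_chars, emb_dim)
        self.lstm = nn.LSTM(emb_dim, hidden, batch_first=True,
                            bidirectional=True)
        self.out = nn.Linear(2 * hidden, n_tags)

    def forward(self, char_ids):           # (batch, seq_len)
        h, _ = self.lstm(self.emb(char_ids))
        return self.out(h)                 # (batch, seq_len, n_tags)

# Toy usage: tag a batch of two 5-character sequences with B/I labels.
model = CharBiLSTMTagger(n_chars=100, n_tags=2)
scores = model(torch.randint(0, 100, (2, 5)))
tags = scores.argmax(dim=-1)               # predicted tag per character
```

Sharing this labeling architecture across the two pipeline stages keeps the raw-text front end uniform before the transition-based parser takes over.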
Co-authors
- Wanxiang Che 2
- Ting Liu 2
- Ruiji Fu 1
- Zhengqi Pei 1
- Jiefu Gong 1
- Wei Song 1
- Shijin Wang 1
- Guoping Hu 1
- Jiang Guo 1
- Yuxuan Wang 1
- Bo Zheng 1
- Huaipeng Zhao 1
- Yang Liu 1