Named Entity Recognition with Stack Residual LSTM and Trainable Bias Decoding

Quan Tran, Andrew MacKinlay, Antonio Jimeno Yepes


Abstract
Recurrent Neural Network models are the state-of-the-art for Named Entity Recognition (NER). We present two innovations to improve the performance of these models. The first is the introduction of residual connections between the layers of the stacked Recurrent Neural Network model, which addresses the degradation problem of deep neural networks. The second is a bias decoding mechanism that allows the trained system to adapt to non-differentiable, externally computed objectives such as the entity-based F-measure. Our work improves the state-of-the-art results for both the Spanish and English languages on the standard train/development/test split of the CoNLL 2003 Shared Task NER dataset.
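The abstract describes two mechanisms: residual connections between the layers of a stacked (bidirectional) LSTM encoder, and an additive bias applied at decoding time that is tuned against a non-differentiable objective such as entity-level F-measure. The Python/PyTorch sketch below illustrates both ideas under stated assumptions; the layer widths, class and function names, and the single 'O'-tag bias are hypothetical choices for exposition and are not taken from the paper's implementation.

```python
# Illustrative sketch only: sizes, names, and the single 'O'-tag bias are
# assumptions made for exposition, not the authors' code.
import torch
import torch.nn as nn


class ResidualStackedLSTM(nn.Module):
    """Stacked bidirectional LSTM with residual (additive) connections
    between consecutive layers, mitigating the degradation problem that
    the abstract attributes to deeper recurrent stacks."""

    def __init__(self, input_size, hidden_size, num_layers=3):
        super().__init__()
        self.layers = nn.ModuleList([
            nn.LSTM(input_size if i == 0 else 2 * hidden_size,
                    hidden_size, bidirectional=True, batch_first=True)
            for i in range(num_layers)
        ])

    def forward(self, x):
        out = x
        for i, lstm in enumerate(self.layers):
            h, _ = lstm(out)
            # Residual connection: add the layer's input to its output.
            # Skipped for the first layer, whose input width differs.
            out = h + out if i > 0 else h
        return out


def biased_decode(tag_scores, o_index, bias):
    """Greedy decoding with an additive bias on the 'O' tag score.

    tag_scores: (seq_len, num_tags) tensor of per-token tag scores.
    A negative bias makes entity tags more likely, trading precision for
    recall; the bias can be chosen by searching over values on the
    development set, since entity F-measure is externally computed and
    non-differentiable.
    """
    scores = tag_scores.clone()
    scores[:, o_index] += bias
    return scores.argmax(dim=-1)
```

For instance, a small grid of bias values could be evaluated on development-set entity F1 and the best-scoring value kept for test-time decoding; this is one simple way to adapt a trained tagger to an objective that cannot be backpropagated through.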
Anthology ID: I17-1057
Volume: Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 1: Long Papers)
Month: November
Year: 2017
Address: Taipei, Taiwan
Editors: Greg Kondrak, Taro Watanabe
Venue: IJCNLP
Publisher: Asian Federation of Natural Language Processing
Pages: 566–575
URL: https://aclanthology.org/I17-1057
Cite (ACL): Quan Tran, Andrew MacKinlay, and Antonio Jimeno Yepes. 2017. Named Entity Recognition with Stack Residual LSTM and Trainable Bias Decoding. In Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 566–575, Taipei, Taiwan. Asian Federation of Natural Language Processing.
Cite (Informal): Named Entity Recognition with Stack Residual LSTM and Trainable Bias Decoding (Tran et al., IJCNLP 2017)
PDF: https://preview.aclanthology.org/naacl-24-ws-corrections/I17-1057.pdf
Data: Billion Word Benchmark, CoNLL 2003, One Billion Word Benchmark