Learning Better Internal Structure of Words for Sequence Labeling

Yingwei Xin, Ethan Hart, Vibhuti Mahajan, Jean-David Ruvini


Abstract
Character-based neural models have recently proven very useful for many NLP tasks. However, there is a gap in sophistication between methods for learning representations of sentences and words. While most character models for learning representations of sentences are deep and complex, models for learning representations of words are shallow and simple. Also, in spite of considerable research on learning character embeddings, it is still unclear which architecture best captures character-to-word representations. To address these questions, we first investigate the gaps between methods for learning word and sentence representations. We conduct detailed experiments and comparisons on different state-of-the-art convolutional models, and also investigate the advantages and disadvantages of their constituents. Furthermore, we propose IntNet, a funnel-shaped wide convolutional neural architecture with no down-sampling for learning representations of the internal structure of words by composing their characters from limited, supervised training corpora. We evaluate our proposed model on six sequence labeling datasets, including named entity recognition, part-of-speech tagging, and syntactic chunking. Our in-depth analysis shows that IntNet significantly outperforms other character embedding models and obtains new state-of-the-art performance without relying on any external knowledge or resources.
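The abstract describes IntNet as a funnel-shaped, wide convolutional architecture that composes characters into a word representation without down-sampling. The sketch below is a hypothetical, numpy-only illustration of that general idea, not the authors' implementation: each layer sees the concatenation of all earlier feature maps (so the input "funnels" wider as depth grows), convolutions of several widths run in parallel with stride 1 and same-length padding (no down-sampling inside the network), and aggregation over character positions happens only once, at the very end. All names, dimensions, and the byte-level character vocabulary are assumptions for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv1d(x, w):
    """Stride-1, same-length 1-D convolution with ReLU (no down-sampling).
    x: (T, d_in) character features; w: (k, d_in, d_out) filter bank."""
    k, d_in, d_out = w.shape
    pad = k // 2
    xp = np.pad(x, ((pad, k - 1 - pad), (0, 0)))  # pad so output length == T
    T = x.shape[0]
    out = np.zeros((T, d_out))
    for t in range(T):
        # each output position sees a k-wide window of the padded input
        out[t] = np.tensordot(xp[t:t + k], w, axes=([0, 1], [0, 1]))
    return np.maximum(out, 0.0)  # ReLU

def funnel_word_vector(word, char_dim=8, widths=(3, 5), n_layers=3, growth=16):
    """Hypothetical IntNet-like word encoder: characters -> one word vector."""
    chars = np.array([ord(c) % 256 for c in word])   # assumed byte-level vocab
    emb = rng.standard_normal((256, char_dim)) * 0.1  # random chars embeddings
    feats = [emb[chars]]                              # (T, char_dim)
    for _ in range(n_layers):
        # funnel: each layer's input concatenates ALL earlier feature maps
        x = np.concatenate(feats, axis=1)
        for k in widths:                              # wide: parallel filter widths
            w = rng.standard_normal((k, x.shape[1], growth)) * 0.1
            feats.append(conv1d(x, w))
    full = np.concatenate(feats, axis=1)              # (T, total_dim), still length T
    return full.max(axis=0)                           # aggregate positions only here

vec = funnel_word_vector("chunking")
```

With `char_dim=8`, 3 layers, and 2 filter widths adding 16 channels each, the final vector has 8 + 3 × 2 × 16 = 104 dimensions; keeping the sequence at full length until the final max is what "no down-sampling" refers to in this sketch.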
Anthology ID:
D18-1279
Volume:
Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing
Month:
October-November
Year:
2018
Address:
Brussels, Belgium
Venue:
EMNLP
SIG:
SIGDAT
Publisher:
Association for Computational Linguistics
Pages:
2584–2593
URL:
https://aclanthology.org/D18-1279
DOI:
10.18653/v1/D18-1279
Cite (ACL):
Yingwei Xin, Ethan Hart, Vibhuti Mahajan, and Jean-David Ruvini. 2018. Learning Better Internal Structure of Words for Sequence Labeling. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2584–2593, Brussels, Belgium. Association for Computational Linguistics.
Cite (Informal):
Learning Better Internal Structure of Words for Sequence Labeling (Xin et al., EMNLP 2018)
PDF:
https://preview.aclanthology.org/ingestion-script-update/D18-1279.pdf
Data
CoNLL-2003, Penn Treebank