Abstract
Representation learning is the foundation of machine reading comprehension. State-of-the-art deep learning models broadly use word-level and character-level representations. However, the character is not naturally the minimal linguistic unit, and by simply concatenating character and word embeddings, previous models yield suboptimal solutions. In this paper, we propose to use subwords rather than characters for word embedding enhancement. We also empirically explore different augmentation strategies for subword-augmented embeddings to enhance the cloze-style reading comprehension model (reader). Specifically, we present a reader that augments word embeddings with subword-level representations and uses a short list to handle rare words effectively. A thorough examination is conducted to evaluate the overall performance and generalization ability of the proposed reader. Experimental results show that the proposed approach helps the reader significantly outperform state-of-the-art baselines on various public datasets.
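The abstract's core idea, augmenting word embeddings with subword representations while restricting trained word vectors to a short list of frequent words, can be illustrated with a minimal PyTorch sketch. This is not the paper's implementation (see the linked code repository for that): all class and parameter names here are hypothetical, the subword segmentation is assumed to be BPE-style, and mean pooling is just one possible aggregation strategy among those the paper compares.

```python
import torch
import torch.nn as nn

class SubwordAugmentedEmbedding(nn.Module):
    """Hypothetical sketch of a subword-augmented word embedding.

    Words inside the short list (the most frequent types) get their own
    word vectors; all other words share a single OOV vector and rely on
    their subword representation. The two parts are concatenated.
    """

    def __init__(self, shortlist_size, subword_vocab_size,
                 word_dim=100, subword_dim=100, oov_index=0):
        super().__init__()
        # Index 0 is reserved for words outside the short list.
        self.word_emb = nn.Embedding(shortlist_size + 1, word_dim)
        self.subword_emb = nn.Embedding(subword_vocab_size, subword_dim,
                                        padding_idx=0)
        self.oov_index = oov_index

    def forward(self, word_ids, subword_ids, shortlist_mask):
        # word_ids:       (batch, seq_len) indices into the short list
        # subword_ids:    (batch, seq_len, max_subwords) per-word subword ids
        # shortlist_mask: (batch, seq_len) 1 if the word is in the short list
        word_ids = torch.where(shortlist_mask.bool(), word_ids,
                               torch.full_like(word_ids, self.oov_index))
        word_vec = self.word_emb(word_ids)
        # Aggregate each word's subword vectors by mean pooling
        # (an assumption; the paper explores several augmentation strategies).
        sub_vec = self.subword_emb(subword_ids).mean(dim=2)
        return torch.cat([word_vec, sub_vec], dim=-1)

# Usage with illustrative sizes: output shape is (2, 7, 200).
emb = SubwordAugmentedEmbedding(shortlist_size=50000, subword_vocab_size=10000)
out = emb(torch.randint(1, 50001, (2, 7)),
          torch.randint(1, 10000, (2, 7, 4)),
          torch.randint(0, 2, (2, 7)))
```

The short list keeps the trainable word table small and forces rare words to lean on their subword composition, which is what the abstract credits for the effective handling of rare words.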
- Anthology ID: C18-1153
- Volume: Proceedings of the 27th International Conference on Computational Linguistics
- Month: August
- Year: 2018
- Address: Santa Fe, New Mexico, USA
- Editors: Emily M. Bender, Leon Derczynski, Pierre Isabelle
- Venue: COLING
- Publisher: Association for Computational Linguistics
- Pages: 1802–1814
- URL: https://aclanthology.org/C18-1153
- Cite (ACL): Zhuosheng Zhang, Yafang Huang, and Hai Zhao. 2018. Subword-augmented Embedding for Cloze Reading Comprehension. In Proceedings of the 27th International Conference on Computational Linguistics, pages 1802–1814, Santa Fe, New Mexico, USA. Association for Computational Linguistics.
- Cite (Informal): Subword-augmented Embedding for Cloze Reading Comprehension (Zhang et al., COLING 2018)
- PDF: https://preview.aclanthology.org/ingest-bitext-workshop/C18-1153.pdf
- Code: cooelf/subMrc
- Data: CBT, CMRC 2017