Abstract
We present a memory-augmented neural network for natural language understanding: Neural Semantic Encoders (NSE). NSE is equipped with a novel memory update rule and has a variable-sized encoding memory that evolves over time and maintains the understanding of input sequences through read, compose, and write operations. NSE can also access multiple and shared memories. In this paper, we demonstrate the effectiveness and flexibility of NSE on five different natural language tasks: natural language inference, question answering, sentence classification, document sentiment analysis, and machine translation, where NSE achieved state-of-the-art performance on publicly available benchmarks. For example, our shared-memory model showed an encouraging result on neural machine translation, improving an attention-based baseline by approximately 1.0 BLEU.
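As a rough illustration of the read, compose, and write cycle the abstract describes, the sketch below runs one NSE-style step per token over a slot-based memory in NumPy. It is a minimal sketch, not the paper's model: the linear-plus-tanh maps stand in for the paper's LSTM read/write functions and MLP composition, and the attention-weighted erase-then-write update is one plausible rendering of the memory update rule.

```python
# Minimal sketch of one NSE-style read-compose-write step (illustrative only;
# the paper uses LSTMs for read/write and an MLP for compose, simplified here
# to linear maps with tanh).
import numpy as np

def softmax(v):
    e = np.exp(v - v.max())
    return e / e.sum()

def nse_step(x_t, M, W_read, W_comp, W_write):
    """One timestep over an encoding memory M of shape (k, d)."""
    o_t = np.tanh(W_read @ x_t)             # read: project the input token
    z_t = softmax(M @ o_t)                  # attention over the k memory slots
    m_t = z_t @ M                           # attention-weighted memory summary
    c_t = np.tanh(W_comp @ np.concatenate([o_t, m_t]))  # compose token + memory
    h_t = np.tanh(W_write @ c_t)            # write: produce the new slot content
    # Assumed erase-then-write update: each slot is erased in proportion to its
    # read weight, then the composed vector is written back with the same weight.
    M_new = (1.0 - z_t)[:, None] * M + z_t[:, None] * h_t[None, :]
    return h_t, M_new

rng = np.random.default_rng(0)
d, k = 8, 5                                 # hidden size, number of memory slots
W_read  = rng.standard_normal((d, d)) * 0.1
W_comp  = rng.standard_normal((d, 2 * d)) * 0.1
W_write = rng.standard_normal((d, d)) * 0.1

M = rng.standard_normal((k, d)) * 0.1       # one slot per input token
for x_t in rng.standard_normal((k, d)):     # encode a toy sequence of k tokens
    h_t, M = nse_step(x_t, M, W_read, W_comp, W_write)
print(h_t.shape, M.shape)                   # (8,) (5, 8)
```

Here the memory has one slot per input token, mirroring the variable-sized encoding memory that grows with sequence length.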
- Anthology ID:
- E17-1038
- Volume:
- Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 1, Long Papers
- Month:
- April
- Year:
- 2017
- Address:
- Valencia, Spain
- Editors:
- Mirella Lapata, Phil Blunsom, Alexander Koller
- Venue:
- EACL
- Publisher:
- Association for Computational Linguistics
- Pages:
- 397–407
- URL:
- https://aclanthology.org/E17-1038
- Cite (ACL):
- Tsendsuren Munkhdalai and Hong Yu. 2017. Neural Semantic Encoders. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 1, Long Papers, pages 397–407, Valencia, Spain. Association for Computational Linguistics.
- Cite (Informal):
- Neural Semantic Encoders (Munkhdalai & Yu, EACL 2017)
- PDF:
- https://preview.aclanthology.org/naacl24-info/E17-1038.pdf
- Code
- tsendeemts/nse + additional community code
- Data
- SNLI, SST, SST-2, WMT 2014, WikiQA