Abstract
Many NLP models operate over sequences of subword tokens produced by hand-crafted tokenization rules and heuristic subword induction algorithms. A simple universal alternative is to represent every computerized text as a sequence of bytes via UTF-8, obviating the need for an embedding layer since there are fewer token types (256) than dimensions. Surprisingly, replacing the ubiquitous embedding layer with one-hot representations of each byte does not hurt performance; experiments on byte-to-byte machine translation from English to 10 different languages show a consistent improvement in BLEU, rivaling character-level and even standard subword-level models. A deeper investigation reveals that the combination of embeddingless models with decoder-input dropout amounts to token dropout, which benefits byte-to-byte models in particular.
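As a rough illustration of the core idea, here is a minimal PyTorch sketch (not the authors' released fairseq implementation) of the input side of an embeddingless byte-level model: text is tokenized by UTF-8 encoding, and a fixed one-hot table replaces the learned embedding matrix, which fits because the vocabulary (256 bytes plus a few special tokens) is smaller than the model dimension. The special-token layout and dimensions below are assumptions for the sketch.

```python
import torch
import torch.nn as nn

NUM_SPECIAL = 3            # hypothetical PAD/BOS/EOS ids 0..2 (layout assumed)
VOCAB = 256 + NUM_SPECIAL  # 256 byte values + special tokens
D_MODEL = 512              # model dimension; note VOCAB < D_MODEL

def bytes_to_ids(text: str) -> torch.Tensor:
    """Tokenize by encoding to UTF-8 and shifting past the special ids."""
    return torch.tensor([b + NUM_SPECIAL for b in text.encode("utf-8")])

class OneHotEmbedding(nn.Module):
    """Parameter-free stand-in for nn.Embedding: each token id maps to a
    fixed one-hot vector, zero-padded from VOCAB up to D_MODEL dimensions."""
    def __init__(self, vocab: int = VOCAB, d_model: int = D_MODEL):
        super().__init__()
        table = torch.zeros(vocab, d_model)
        table[:, :vocab] = torch.eye(vocab)
        self.register_buffer("table", table)  # fixed, never trained

    def forward(self, ids: torch.Tensor) -> torch.Tensor:
        return self.table[ids]

ids = bytes_to_ids("שלום")   # non-ASCII text expands into several bytes
x = OneHotEmbedding()(ids)   # shape (seq_len, D_MODEL), with no learned parameters
print(ids.shape, x.shape)
```

These one-hot vectors feed directly into a standard Transformer; the paper's token-dropout observation concerns dropout applied to such decoder inputs, which zeroes out whole one-hot tokens rather than individual embedding dimensions.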
- Anthology ID: 2021.naacl-main.17
- Volume: Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
- Month: June
- Year: 2021
- Address: Online
- Editors: Kristina Toutanova, Anna Rumshisky, Luke Zettlemoyer, Dilek Hakkani-Tur, Iz Beltagy, Steven Bethard, Ryan Cotterell, Tanmoy Chakraborty, Yichao Zhou
- Venue: NAACL
- Publisher: Association for Computational Linguistics
- Pages: 181–186
- URL: https://aclanthology.org/2021.naacl-main.17
- DOI: 10.18653/v1/2021.naacl-main.17
- Cite (ACL): Uri Shaham and Omer Levy. 2021. Neural Machine Translation without Embeddings. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 181–186, Online. Association for Computational Linguistics.
- Cite (Informal): Neural Machine Translation without Embeddings (Shaham & Levy, NAACL 2021)
- PDF: https://preview.aclanthology.org/nschneid-patch-2/2021.naacl-main.17.pdf
- Code: UriSha/EmbeddinglessNMT (plus additional community code)