Abstract
Softmax is the de facto standard for normalizing logits in modern neural networks for language processing. However, because it produces a dense probability distribution, every token in the vocabulary has a nonzero chance of being selected at each generation step, which leads to a variety of reported problems in text generation. The 𝛼-entmax of Peters et al. (2019) solves this problem, but is unfortunately slower than softmax. In this paper, we propose an alternative to 𝛼-entmax that keeps its virtuous characteristics but is as fast as optimized softmax and achieves on-par or better performance on the machine translation task.
- Anthology ID:
- 2022.findings-naacl.86
- Volume:
- Findings of the Association for Computational Linguistics: NAACL 2022
- Month:
- July
- Year:
- 2022
- Address:
- Seattle, United States
- Editors:
- Marine Carpuat, Marie-Catherine de Marneffe, Ivan Vladimir Meza Ruiz
- Venue:
- Findings
- Publisher:
- Association for Computational Linguistics
- Pages:
- 1142–1158
- URL:
- https://aclanthology.org/2022.findings-naacl.86
- DOI:
- 10.18653/v1/2022.findings-naacl.86
- Cite (ACL):
- Maxat Tezekbayev, Vassilina Nikoulina, Matthias Gallé, and Zhenisbek Assylbekov. 2022. Speeding Up Entmax. In Findings of the Association for Computational Linguistics: NAACL 2022, pages 1142–1158, Seattle, United States. Association for Computational Linguistics.
- Cite (Informal):
- Speeding Up Entmax (Tezekbayev et al., Findings 2022)
- PDF:
- https://aclanthology.org/2022.findings-naacl.86.pdf
- Code:
- maxattezekbayev/alpha-relu
- Data:
- WMT 2014
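
To illustrate the density contrast the abstract describes, below is a minimal NumPy sketch comparing softmax with sparsemax, the 𝛼=2 special case of 𝛼-entmax (Martins & Astudillo, 2016). This is a generic illustration under my own naming, not the paper's proposed method; the authors' actual implementation is in the maxattezekbayev/alpha-relu repository linked above.

```python
import numpy as np

def softmax(z):
    """Dense normalization: every logit receives strictly positive probability."""
    e = np.exp(z - z.max())  # subtract max for numerical stability
    return e / e.sum()

def sparsemax(z):
    """Sparsemax (Martins & Astudillo, 2016), the alpha=2 case of alpha-entmax.
    Euclidean projection onto the simplex: low-scoring logits get exactly 0."""
    zs = np.sort(z)[::-1]                # logits sorted in descending order
    k = np.arange(1, len(z) + 1)
    cssv = np.cumsum(zs)
    support = 1 + k * zs > cssv          # entries kept in the support set
    k_max = support.sum()                # support size (support is a prefix)
    tau = (cssv[k_max - 1] - 1) / k_max  # threshold that makes probs sum to 1
    return np.clip(z - tau, 0.0, None)

logits = np.array([2.0, 1.0, 0.1, -1.0])
print(softmax(logits))    # all four entries are nonzero
print(sparsemax(logits))  # [1. 0. 0. 0.]: low-scoring tokens get exact zeros
```

With these logits, softmax assigns every token a nonzero probability, whereas sparsemax zeroes out all but the top-scoring token, which is the sparsity property that 𝛼-entmax provides and that the paper's faster alternative aims to preserve.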