Abstract
This paper describes the Notre Dame Natural Language Processing Group's (NDNLP) submission to the WNGT 2019 shared task (Hayashi et al., 2019). We investigated the impact of applying auto-sizing (Murray and Chiang, 2015; Murray et al., 2019) to the Transformer network (Vaswani et al., 2017) with the goal of substantially reducing the number of parameters in the model. Our method eliminated more than 25% of the model's parameters while suffering a decrease of only 1.1 BLEU.
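For readers unfamiliar with the technique: auto-sizing adds a group regularizer (e.g., an ℓ2,1 or ℓ∞,1 norm over the rows of a parameter matrix) to the training objective and optimizes it with proximal gradient steps, so entire rows are driven exactly to zero and the corresponding hidden units can be deleted. Below is a minimal PyTorch sketch of the ℓ2,1 variant on a Transformer-style feed-forward sublayer; it illustrates the idea only and is not NDNLP's submission code, and the layer sizes and regularization strength `lam` are assumed values for demonstration.

```python
# Minimal sketch of auto-sizing's group regularization (not the authors' code):
# an l_{2,1} penalty over rows of a weight matrix, applied as a proximal step
# so whole rows become exactly zero and their hidden units can be pruned.
import torch
import torch.nn as nn

def l21_prox_(weight: torch.Tensor, lam: float, lr: float) -> None:
    """In-place proximal step for the l_{2,1} norm over rows of `weight`.

    Each row w_i is shrunk: w_i <- w_i * max(0, 1 - lr*lam/||w_i||).
    Rows with norm below lr*lam become exactly zero (prunable units).
    """
    with torch.no_grad():
        norms = weight.norm(dim=1, keepdim=True).clamp_min(1e-12)
        scale = (1.0 - lr * lam / norms).clamp_min(0.0)
        weight.mul_(scale)

# Toy stand-in for the Transformer's position-wise feed-forward sublayer.
# d_model, d_ff, lam, and lr are illustrative assumptions.
d_model, d_ff, lam, lr = 64, 256, 0.01, 0.1
ffn = nn.Sequential(nn.Linear(d_model, d_ff), nn.ReLU(), nn.Linear(d_ff, d_model))
opt = torch.optim.SGD(ffn.parameters(), lr=lr)

x = torch.randn(32, d_model)
for _ in range(100):
    opt.zero_grad()
    loss = ffn(x).pow(2).mean()  # stand-in for the task loss
    loss.backward()
    opt.step()
    # Proximal step on the first layer: each row corresponds to one hidden unit.
    # (A full implementation would also zero the matching bias entries.)
    l21_prox_(ffn[0].weight, lam, lr)

alive = (ffn[0].weight.norm(dim=1) > 0).sum().item()
print(f"hidden units remaining: {alive}/{d_ff}")
```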
- Anthology ID: D19-5634
- Volume: Proceedings of the 3rd Workshop on Neural Generation and Translation
- Month: November
- Year: 2019
- Address: Hong Kong
- Venue: NGT
- Publisher: Association for Computational Linguistics
- Pages: 297–301
- URL: https://aclanthology.org/D19-5634
- DOI: 10.18653/v1/D19-5634
- Cite (ACL): Kenton Murray, Brian DuSell, and David Chiang. 2019. Efficiency through Auto-Sizing: Notre Dame NLP's Submission to the WNGT 2019 Efficiency Task. In Proceedings of the 3rd Workshop on Neural Generation and Translation, pages 297–301, Hong Kong. Association for Computational Linguistics.
- Cite (Informal): Efficiency through Auto-Sizing: Notre Dame NLP's Submission to the WNGT 2019 Efficiency Task (Murray et al., NGT 2019)
- PDF: https://preview.aclanthology.org/nodalida-main-page/D19-5634.pdf