Edinburgh’s Submission to the WMT 2022 Efficiency Task
Nikolay Bogoychev, Maximiliana Behnke, Jelmer Van Der Linde, Graeme Nail, Kenneth Heafield, Biao Zhang, Sidharth Kashyap
Abstract
We participated in all tracks of the WMT 2022 efficient machine translation task: single-core CPU, multi-core CPU, and GPU hardware, under both throughput and latency conditions. Our submissions explore several efficiency strategies: knowledge distillation, a simpler simple recurrent unit (SSRU) decoder with one or two layers, shortlisting, a deep encoder with a shallow decoder, pruning, and a bidirectional decoder. For the CPU track, we used quantized 8-bit models; for the GPU track, we used FP16 quantisation. We explored various pruning strategies and combinations of one or more of the above methods.
- Anthology ID:
- 2022.wmt-1.63
- Volume:
- Proceedings of the Seventh Conference on Machine Translation (WMT)
- Month:
- December
- Year:
- 2022
- Address:
- Abu Dhabi, United Arab Emirates (Hybrid)
- Venue:
- WMT
- Publisher:
- Association for Computational Linguistics
- Pages:
- 661–667
- URL:
- https://aclanthology.org/2022.wmt-1.63
- Cite (ACL):
- Nikolay Bogoychev, Maximiliana Behnke, Jelmer Van Der Linde, Graeme Nail, Kenneth Heafield, Biao Zhang, and Sidharth Kashyap. 2022. Edinburgh’s Submission to the WMT 2022 Efficiency Task. In Proceedings of the Seventh Conference on Machine Translation (WMT), pages 661–667, Abu Dhabi, United Arab Emirates (Hybrid). Association for Computational Linguistics.
- Cite (Informal):
- Edinburgh’s Submission to the WMT 2022 Efficiency Task (Bogoychev et al., WMT 2022)
- PDF:
- https://preview.aclanthology.org/ingestion-script-update/2022.wmt-1.63.pdf
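The abstract mentions quantized 8-bit models for the CPU track. As a generic illustration of that idea (not the submission's actual implementation, which runs inside the Marian framework), here is a minimal sketch of symmetric per-tensor int8 quantisation: float weights are scaled by their maximum absolute value so they map into the signed 8-bit range [-127, 127], trading a small amount of precision for faster integer arithmetic and a 4x smaller memory footprint versus FP32.

```python
def quantize_int8(weights):
    # Symmetric per-tensor quantisation: one scale for the whole
    # tensor, chosen so the largest-magnitude weight maps to +/-127.
    scale = max(abs(w) for w in weights) / 127.0
    quantized = [round(w / scale) for w in weights]
    return quantized, scale

def dequantize(quantized, scale):
    # Recover approximate float weights from int8 values.
    return [q * scale for q in quantized]

weights = [0.5, -1.27, 0.03, 1.0]
q, scale = quantize_int8(weights)      # q = [50, -127, 3, 100]
restored = dequantize(q, scale)        # close to the original weights
```

In practice, production systems apply such scales per matrix (or per row) and execute the matrix multiplications directly in int8 with specialised SIMD kernels; this sketch only shows the rounding-and-scaling step.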