2022
Edinburgh’s Submission to the WMT 2022 Efficiency Task
Nikolay Bogoychev | Maximiliana Behnke | Jelmer Van Der Linde | Graeme Nail | Kenneth Heafield | Biao Zhang | Sidharth Kashyap
Proceedings of the Seventh Conference on Machine Translation (WMT)
We participated in all tracks of the WMT 2022 efficient machine translation task: single-core CPU, multi-core CPU, and GPU hardware with throughput and latency conditions. Our submissions explore several efficiency strategies: knowledge distillation, a simpler simple recurrent unit (SSRU) decoder with one or two layers, shortlisting, a deep encoder paired with a shallow decoder, pruning, and a bidirectional decoder. For the CPU track, we used quantized 8-bit models. For the GPU track, we used FP16 quantisation. We explored various pruning strategies and combinations of one or more of the above methods.
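As a rough illustration of the shortlisting idea mentioned in this abstract, the Python sketch below restricts the output vocabulary to a set of always-kept target tokens plus the best translation candidates of each source token. It is a minimal sketch under my own assumptions (function and variable names are hypothetical), not the implementation used in the submission.

    def build_shortlist(source_ids, align_probs, always_keep, n_best=20):
        """Return the target vocabulary ids the decoder is allowed to produce.

        source_ids  -- source token ids of the input sentence
        align_probs -- dict: source id -> list of (target id, p(t|s)) pairs
        always_keep -- target ids that are always retained (e.g. the most frequent)
        """
        shortlist = set(always_keep)
        for s in source_ids:
            candidates = sorted(align_probs.get(s, []), key=lambda x: -x[1])[:n_best]
            shortlist.update(t for t, _ in candidates)
        return sorted(shortlist)

The output projection and softmax are then computed only over the shortlisted rows, shrinking the largest matrix product in the decoder.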
2021
Efficient Machine Translation with Model Pruning and Quantization
Maximiliana Behnke | Nikolay Bogoychev | Alham Fikri Aji | Kenneth Heafield | Graeme Nail | Qianqian Zhu | Svetlana Tchistiakova | Jelmer van der Linde | Pinzhen Chen | Sidharth Kashyap | Roman Grundkiewicz
Proceedings of the Sixth Conference on Machine Translation
We participated in all tracks of the WMT 2021 efficient machine translation task: single-core CPU, multi-core CPU, and GPU hardware with throughput and latency conditions. Our submissions combine several efficiency strategies: knowledge distillation, a simpler simple recurrent unit (SSRU) decoder with one or two layers, lexical shortlists, smaller numerical formats, and pruning. For the CPU track, we used quantized 8-bit models. For the GPU track, we experimented with FP16 and 8-bit integers in tensorcores. Some of our submissions optimize for size via 4-bit log quantization and omitting a lexical shortlist. We have extended pruning to more parts of the network, emphasizing component- and block-level pruning, which, unlike coefficient-wise pruning, actually improves speed.
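To make the 4-bit log quantization mentioned above concrete, here is a minimal Python sketch of one plausible scheme (a sign bit plus a 3-bit exponent relative to a per-tensor scale); it is my own illustration and not necessarily the exact scheme used in the submissions.

    import numpy as np

    def log_quantize_4bit(w):
        # Approximate each weight by sign * scale * 2**(-k), with k stored in 3 bits.
        scale = np.abs(w).max()
        with np.errstate(divide="ignore"):
            k = np.round(-np.log2(np.abs(w) / scale))    # exponent relative to the scale
        k = np.clip(np.nan_to_num(k, posinf=7.0), 0, 7)  # 3 bits: k in 0..7 (zeros map to the smallest level)
        return np.sign(w) * scale * 2.0 ** (-k)          # dequantized floats for use at runtime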
Pruning Neural Machine Translation for Speed Using Group Lasso
Maximiliana Behnke | Kenneth Heafield
Proceedings of the Sixth Conference on Machine Translation
Unlike most work on pruning neural networks, we make inference faster. Group lasso regularisation enables pruning entire rows, columns or blocks of parameters, resulting in a smaller dense network. Because the network is still dense, efficient matrix multiply routines are still used and only minimal software changes are required to support variable layer sizes. Moreover, pruning is applied during training so there is no separate pruning step. Experiments on top of English→German models, which already have state-of-the-art speed and size, show that two-thirds of feedforward connections can be removed with 0.2 BLEU loss. With 6 decoder layers, the pruned model is 34% faster; with 2 tied decoder layers, the pruned model is 14% faster. Pruning entire heads and feedforward connections in a 12–1 encoder-decoder architecture gains an additional 51% speed-up. These gains push the Pareto frontier with respect to the trade-off between time and quality compared to strong baselines. In the WMT 2021 Efficiency Task, our pruned and quantised models are 1.9–2.7x faster at the cost of 0.9–1.7 BLEU in comparison to the unoptimised baselines. Across language pairs, we see similar sparsity patterns: an ascending or U-shaped distribution in encoder feedforward and attention layers and an ascending distribution in the decoder.
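As a rough sketch of the regularised objective described above (notation mine, not necessarily the paper's), group lasso adds an L2 penalty per prunable group so that whole rows, columns or blocks are driven to zero together:

    \mathcal{L}(\theta) = \mathcal{L}_{\mathrm{NMT}}(\theta) + \lambda \sum_{g \in \mathcal{G}} \sqrt{|g|}\, \lVert \theta_g \rVert_2

where \mathcal{G} is the set of groups (e.g. feedforward rows or attention heads), \theta_g are the parameters of group g, and \lambda controls the regularisation strength; groups whose norm is (near) zero at the end of training are removed, leaving a smaller dense network.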
2020
Edinburgh’s Submissions to the 2020 Machine Translation Efficiency Task
Nikolay Bogoychev | Roman Grundkiewicz | Alham Fikri Aji | Maximiliana Behnke | Kenneth Heafield | Sidharth Kashyap | Emmanouil-Ioannis Farsarakis | Mateusz Chudyk
Proceedings of the Fourth Workshop on Neural Generation and Translation
We participated in all tracks of the Workshop on Neural Generation and Translation 2020 Efficiency Shared Task: single-core CPU, multi-core CPU, and GPU. At the model level, we use teacher-student training with a variety of student sizes, tie embeddings and sometimes layers, use the Simpler Simple Recurrent Unit, and introduce head pruning. On GPUs, we used 16-bit floating-point tensor cores. On CPUs, we customized 8-bit quantization and ran multiple processes with core affinity for the multi-core setting. To reduce model size, we experimented with 4-bit log quantization but use floats at runtime. In the shared task, most of our submissions were Pareto optimal with respect to the trade-off between time and quality.
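For illustration of the 8-bit CPU path mentioned above, the Python sketch below shows plain symmetric per-tensor int8 quantization; the actual submissions use customized kernels, so the helper names and the per-tensor scaling are my own assumptions.

    import numpy as np

    def quantize_int8(w):
        # One float scale per tensor; weights are mapped into the int8 range [-127, 127].
        scale = np.abs(w).max() / 127.0
        q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
        return q, scale

    def dequantize_int8(q, scale):
        # Inference runs the matrix multiply in int8 and rescales the result.
        return q.astype(np.float32) * scale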
Losing Heads in the Lottery: Pruning Transformer Attention in Neural Machine Translation
Maximiliana Behnke | Kenneth Heafield
Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)
The attention mechanism is the crucial component of the transformer architecture. Recent research shows that most attention heads are not confident in their decisions and can be pruned. However, removing them before training a model results in lower quality. In this paper, we apply the lottery ticket hypothesis to prune heads in the early stages of training. Our experiments on machine translation show that it is possible to remove up to three-quarters of attention heads from transformer-big during early training with an average -0.1 change in BLEU for Turkish→English. The pruned model is 1.5 times as fast at inference, albeit at the cost of longer training. Our method is complementary to other approaches, such as teacher-student, with an English→German student model gaining an additional 10% speed-up with 75% of encoder attention removed and a 0.2 BLEU loss.
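A minimal sketch of the selection step implied above, assuming a per-head importance or confidence score has already been collected early in training (the scoring itself and all names here are hypothetical): rank the heads and keep only the top fraction, masking the rest before training continues.

    import numpy as np

    def head_keep_mask(head_scores, keep_fraction=0.25):
        # True for heads to keep, False for heads to prune.
        scores = np.asarray(head_scores, dtype=float)
        n_keep = max(1, int(round(keep_fraction * len(scores))))
        keep = np.argsort(scores)[::-1][:n_keep]
        mask = np.zeros(len(scores), dtype=bool)
        mask[keep] = True
        return mask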
2018
Improving Machine Translation of Educational Content via Crowdsourcing
Maximiliana Behnke | Antonio Valerio Miceli Barone | Rico Sennrich | Vilelmini Sosoni | Thanasis Naskos | Eirini Takoulidou | Maria Stasimioti | Menno van Zaanen | Sheila Castilho | Federico Gaspari | Panayota Georgakopoulou | Valia Kordoni | Markus Egg | Katia Lida Kermanidis
Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018)