Iterative Layer Pruning for Efficient Translation Inference

Yasmin Moslem, Muhammad Hazim Al Farouq, John Kelleher

Abstract
Large language models (LLMs) have transformed many areas of natural language processing, including machine translation. However, efficient deployment of LLMs remains challenging due to their intensive computational requirements. In this paper, we address this challenge and present our submissions to the Model Compression track at the Conference on Machine Translation (WMT 2025). In our experiments, we investigate iterative layer pruning guided by layer importance analysis. We evaluate this method using the Aya-Expanse-8B model for translation from Czech to German, and from English to Egyptian Arabic. Our approach achieves substantial reductions in model size and inference time, while maintaining the translation quality of the baseline models.
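For context, the pruning loop described in the abstract can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' implementation: it assumes layer importance is measured by how little a layer changes its hidden states (a cosine-similarity heuristic used in prior layer-pruning work), and it uses a hypothetical evaluate function as a stand-in for the translation-quality check (e.g., a COMET or chrF score on a dev set) that would gate each pruning step.

# Minimal sketch of iterative layer pruning guided by layer importance.
# Assumptions (not from the paper): importance = 1 - cosine similarity
# between a layer's input and output hidden states; `evaluate` is a
# hypothetical stand-in for a translation-quality check on a dev set.
import torch
import torch.nn as nn
import torch.nn.functional as F

def layer_importance(layers, hidden):
    """Score each layer by how much it transforms its input; layers that
    barely change the hidden states are candidates for pruning."""
    scores = []
    with torch.no_grad():
        for layer in layers:
            out = layer(hidden)
            sim = F.cosine_similarity(hidden.flatten(1), out.flatten(1)).mean()
            scores.append(1.0 - sim.item())
            hidden = out
    return scores

def prune_iteratively(layers, calib_batch, evaluate, min_quality):
    """Drop the least important layer, re-check quality, and repeat until
    removing another layer would fall below min_quality."""
    layers = nn.ModuleList(layers).eval()
    while len(layers) > 1:
        scores = layer_importance(layers, calib_batch)
        idx = min(range(len(scores)), key=scores.__getitem__)
        candidate = nn.ModuleList(m for i, m in enumerate(layers) if i != idx)
        if evaluate(candidate) < min_quality:
            break  # pruning this layer hurts quality too much; stop
        layers = candidate
    return layers

# Toy usage: eight encoder layers, a random calibration batch, and a dummy
# quality metric (a real run would score a model such as Aya-Expanse-8B
# on a held-out translation set instead).
torch.manual_seed(0)
stack = [nn.TransformerEncoderLayer(d_model=64, nhead=4, batch_first=True)
         for _ in range(8)]
calib = torch.randn(4, 16, 64)                     # (batch, seq, d_model)
dummy_quality = lambda ls: 1.0 - 0.02 * (8 - len(ls))
pruned = prune_iteratively(stack, calib, dummy_quality, min_quality=0.9)
print(f"kept {len(pruned)} of 8 layers")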
Anthology ID: 2025.wmt-1.78
Volume: Proceedings of the Tenth Conference on Machine Translation
Month: November
Year: 2025
Address: Suzhou, China
Editors: Barry Haddow, Tom Kocmi, Philipp Koehn, Christof Monz
Venue: WMT
Publisher: Association for Computational Linguistics
Pages: 1022–1027
URL: https://preview.aclanthology.org/ingest-emnlp/2025.wmt-1.78/
Cite (ACL): Yasmin Moslem, Muhammad Hazim Al Farouq, and John Kelleher. 2025. Iterative Layer Pruning for Efficient Translation Inference. In Proceedings of the Tenth Conference on Machine Translation, pages 1022–1027, Suzhou, China. Association for Computational Linguistics.
Cite (Informal): Iterative Layer Pruning for Efficient Translation Inference (Moslem et al., WMT 2025)
PDF: https://preview.aclanthology.org/ingest-emnlp/2025.wmt-1.78.pdf