Bag of Tricks for Optimizing Transformer Efficiency

Ye Lin, Yanyang Li, Tong Xiao, Jingbo Zhu


Abstract
Improving Transformer efficiency has become increasingly attractive recently. A wide range of methods has been proposed, e.g., pruning, quantization, and new architectures. However, these methods are either complicated to implement or dependent on hardware. In this paper, we show that the efficiency of the Transformer can be improved by combining some simple and hardware-agnostic methods, including tuning hyper-parameters, better design choices, and training strategies. On the WMT news translation tasks, we improve the inference efficiency of a strong Transformer system by 3.80x on CPU and 2.52x on GPU.
Anthology ID:
2021.findings-emnlp.357
Volume:
Findings of the Association for Computational Linguistics: EMNLP 2021
Month:
November
Year:
2021
Address:
Punta Cana, Dominican Republic
Editors:
Marie-Francine Moens, Xuanjing Huang, Lucia Specia, Scott Wen-tau Yih
Venue:
Findings
SIG:
SIGDAT
Publisher:
Association for Computational Linguistics
Pages:
4227–4233
URL:
https://aclanthology.org/2021.findings-emnlp.357
DOI:
10.18653/v1/2021.findings-emnlp.357
Cite (ACL):
Ye Lin, Yanyang Li, Tong Xiao, and Jingbo Zhu. 2021. Bag of Tricks for Optimizing Transformer Efficiency. In Findings of the Association for Computational Linguistics: EMNLP 2021, pages 4227–4233, Punta Cana, Dominican Republic. Association for Computational Linguistics.
Cite (Informal):
Bag of Tricks for Optimizing Transformer Efficiency (Lin et al., Findings 2021)
PDF:
https://preview.aclanthology.org/naacl-24-ws-corrections/2021.findings-emnlp.357.pdf
Video:
https://preview.aclanthology.org/naacl-24-ws-corrections/2021.findings-emnlp.357.mp4
Code:
lollipop321/mini-decoder-network