Dynamically Adjusting Transformer Batch Size by Monitoring Gradient Direction Change

Hongfei Xu, Josef van Genabith, Deyi Xiong, Qiuhui Liu


Abstract
The choice of hyper-parameters affects the performance of neural models. While much previous research (Sutskever et al., 2013; Duchi et al., 2011; Kingma and Ba, 2015) focuses on accelerating convergence and reducing the effects of the learning rate, comparatively few papers concentrate on the effect of batch size. In this paper, we analyze how increasing the batch size affects gradient direction, and propose to evaluate the stability of gradients by the change in their angle. In our observations, the angle change of the gradient direction first tends to stabilize (i.e., gradually decrease) while mini-batches are accumulated, and then starts to fluctuate. We therefore propose to determine batch sizes automatically and dynamically by accumulating gradients of mini-batches and performing an optimization step precisely when the gradient direction starts to fluctuate. To keep our approach efficient for large models, we propose a sampling approach that selects the gradients of parameters sensitive to the batch size. Our approach dynamically determines proper and efficient batch sizes during training. In our experiments on the WMT 14 English-to-German and English-to-French tasks, our approach improves over the Transformer with a fixed 25k batch size by +0.73 and +0.82 BLEU respectively.
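The abstract describes the procedure only at a high level. The sketch below (PyTorch-style Python) illustrates one plausible reading of it: gradients of successive mini-batches are accumulated, the angle between the accumulated gradient before and after adding each mini-batch is tracked, and a single optimizer step is taken once that angle stops decreasing and starts to grow. All names here (flat_grad, angle_between, dynamic_accumulation_step, max_accum) are hypothetical, and the params argument stands in for the sampled subset of batch-size-sensitive parameters mentioned in the abstract; this is a sketch under those assumptions, not the authors' released implementation.

import math

import torch
import torch.nn.functional as F


def flat_grad(params):
    # Concatenate the current .grad buffers of the monitored parameters into a
    # single vector (torch.cat copies, so later accumulation does not alter it).
    return torch.cat([p.grad.detach().reshape(-1) for p in params if p.grad is not None])


def angle_between(g_prev, g_cur):
    # Angle (in radians) between the previous and current accumulated gradients.
    cos = F.cosine_similarity(g_prev, g_cur, dim=0).clamp(-1.0, 1.0)
    return torch.acos(cos).item()


def dynamic_accumulation_step(model, optimizer, loss_fn, batch_iter, params, max_accum=32):
    # Accumulate mini-batch gradients until the direction change starts to
    # fluctuate (the angle grows again), then apply one optimizer step.
    optimizer.zero_grad()
    prev_grad, prev_angle = None, math.inf
    num_batches = 0
    for _, (x, y) in zip(range(max_accum), batch_iter):
        loss_fn(model(x), y).backward()      # gradients accumulate in .grad
        num_batches += 1
        cur_grad = flat_grad(params)         # params: sampled, batch-size-sensitive subset
        if prev_grad is not None:
            angle = angle_between(prev_grad, cur_grad)
            if angle > prev_angle:           # direction no longer stabilizing -> stop accumulating
                break
            prev_angle = angle
        prev_grad = cur_grad
    optimizer.step()
    optimizer.zero_grad()
    return num_batches                       # dynamic batch size (in mini-batches) for this update

In this reading, params could be restricted to a few sampled weight matrices so that the angle computation stays cheap relative to the forward and backward passes, and max_accum merely caps accumulation so an update is always eventually performed.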
Anthology ID: 2020.acl-main.323
Volume: Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics
Month: July
Year: 2020
Address: Online
Editors: Dan Jurafsky, Joyce Chai, Natalie Schluter, Joel Tetreault
Venue: ACL
Publisher: Association for Computational Linguistics
Pages: 3519–3524
URL: https://aclanthology.org/2020.acl-main.323
DOI: 10.18653/v1/2020.acl-main.323
Cite (ACL): Hongfei Xu, Josef van Genabith, Deyi Xiong, and Qiuhui Liu. 2020. Dynamically Adjusting Transformer Batch Size by Monitoring Gradient Direction Change. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 3519–3524, Online. Association for Computational Linguistics.
Cite (Informal): Dynamically Adjusting Transformer Batch Size by Monitoring Gradient Direction Change (Xu et al., ACL 2020)
PDF: https://preview.aclanthology.org/landing_page/2020.acl-main.323.pdf
Video: http://slideslive.com/38929013