Xiuqing Lv
2023
LightFormer: Light-weight Transformer Using SVD-based Weight Transfer and Parameter Sharing
Xiuqing Lv | Peng Zhang | Sunzhu Li | Guobing Gan | Yueheng Sun
Findings of the Association for Computational Linguistics: ACL 2023
Transformer has become an important technique for natural language processing tasks with great success. However, it usually requires huge storage space and computational cost, making it difficult to deploy on resource-constrained edge devices. To compress and accelerate Transformer, we propose LightFormer, which adopts a low-rank factorization initialized by SVD-based weight transfer and parameter sharing. The SVD-based weight transfer exploits the parameter knowledge of a well-trained Transformer to speed up model convergence and, combined with parameter sharing, alleviates the low-rank bottleneck problem. We validate our method on machine translation, text summarization and text classification tasks. Experiments show that on IWSLT’14 De-En and WMT’14 En-De, LightFormer achieves performance comparable to the baseline Transformer with 3.8 times and 1.8 times fewer parameters, and achieves 2.3 times and 1.5 times speedups respectively, generally outperforming recent light-weight Transformers.
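The core idea of the SVD-based weight transfer can be illustrated with a minimal sketch (not the authors' released code): a dense weight from a well-trained Transformer is approximated by two low-rank factors obtained from its truncated SVD, which then initialize the factorized layer. The shapes and rank below are illustrative assumptions.

```python
# Minimal sketch of SVD-based low-rank initialization (illustrative, not the paper's code).
import numpy as np

def svd_init(weight: np.ndarray, rank: int):
    """Return factors A (d_out x r) and B (r x d_in) with A @ B approximating `weight`."""
    u, s, vt = np.linalg.svd(weight, full_matrices=False)
    a = u[:, :rank] * np.sqrt(s[:rank])            # absorb sqrt of singular values into each factor
    b = np.sqrt(s[:rank])[:, None] * vt[:rank]
    return a, b

# Example: compress a 512x512 projection (stand-in random weight) to rank 64.
w = np.random.randn(512, 512).astype(np.float32)
A, B = svd_init(w, rank=64)
print(np.linalg.norm(w - A @ B) / np.linalg.norm(w))  # relative approximation error
```

The two factors replace the original dense matrix in the compressed model, so training starts from an approximation of the pretrained weights rather than from random initialization.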
2022
Hypoformer: Hybrid Decomposition Transformer for Edge-friendly Neural Machine Translation
Sunzhu Li | Peng Zhang | Guobing Gan | Xiuqing Lv | Benyou Wang | Junqiu Wei | Xin Jiang
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
Transformer has been demonstrated to be effective in Neural Machine Translation (NMT). However, it is memory- and time-consuming on edge devices, making real-time feedback difficult. To compress and accelerate Transformer, we propose a Hybrid Tensor-Train (HTT) decomposition, which retains full rank while reducing operations and parameters. A Transformer using HTT, named Hypoformer, consistently and notably outperforms recent light-weight SOTA methods on three standard translation tasks under different parameter and speed scales. In extremely low-resource scenarios, Hypoformer achieves a 7.1-point absolute BLEU improvement and a 1.27 times speedup over the vanilla Transformer on the IWSLT’14 De-En task.
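For intuition, the sketch below shows a generic two-core tensor-train (matrix-product) factorization of a weight matrix; the paper's hybrid variant additionally combines this with full-rank components, which is not reproduced here. All shapes and the rank are illustrative assumptions.

```python
# Generic two-core tensor-train factorization of a weight matrix (illustrative sketch only).
import numpy as np

m1, m2, n1, n2, r = 16, 32, 16, 32, 8                 # full weight is (m1*m2) x (n1*n2) = 512 x 512
g1 = np.random.randn(m1, n1, r).astype(np.float32)    # first TT core
g2 = np.random.randn(r, m2, n2).astype(np.float32)    # second TT core

# Reconstruct the full weight: W[(i1,i2),(j1,j2)] = sum_a g1[i1,j1,a] * g2[a,i2,j2]
w = np.einsum('ija,akl->ikjl', g1, g2).reshape(m1 * m2, n1 * n2)

# Parameter count drops from 512*512 = 262144 to m1*n1*r + r*m2*n2 = 2048 + 8192 = 10240.
print(w.shape, g1.size + g2.size)
```

Because the reconstructed matrix is only formed implicitly, both the parameter count and the per-token multiply-adds shrink relative to the dense layer, which is the source of the reported speedups on edge hardware.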
Co-authors
- Peng Zhang 2
- Sunzhu Li 2
- Guobing Gan 2
- Yueheng Sun 1
- Benyou Wang 1