PartialFormer: Modeling Part Instead of Whole for Machine Translation
Tong Zheng | Bei Li | Huiwen Bao | Jiale Wang | Weiqiao Shan | Tong Xiao | JingBo Zhu
Findings of the Association for Computational Linguistics: ACL 2024
The design choices in Transformer feed-forward neural networks have resulted in significant computational and parameter overhead. In this work, we emphasize the importance of hidden dimensions in designing lightweight FFNs, a factor often overlooked in previous architectures. Guided by this principle, we introduce PartialFormer, a parameter-efficient Transformer architecture utilizing multiple smaller FFNs to reduce parameters and computation while maintaining essential hidden dimensions. These smaller FFNs are integrated into a multi-head attention mechanism for effective collaboration. We also propose a tailored head scaling strategy to enhance PartialFormer's capabilities. Furthermore, we present a residual-like attention calculation to improve depth scaling within PartialFormer. Extensive experiments on 9 translation tasks and 1 abstractive summarization task validate the effectiveness of our PartialFormer approach. Our code is available at: https://github.com/zhengkid/PartialFormer.
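To make the core idea concrete, the sketch below pairs each attention head with its own small FFN, replacing the single wide FFN of a standard Transformer layer. This is an illustrative reading of the abstract only, not the released implementation at the linked repository; the module name `PartialFormerStyleLayer`, the dimension choices, and the exact placement of the per-head FFNs are assumptions.

```python
# Minimal sketch (assumed, illustration only): one small FFN per attention
# head instead of a single wide FFN over the full model dimension.
import torch
import torch.nn as nn


class PartialFormerStyleLayer(nn.Module):
    def __init__(self, d_model: int = 512, n_heads: int = 8, head_ffn_dim: int = 256):
        super().__init__()
        assert d_model % n_heads == 0
        self.n_heads = n_heads
        self.d_head = d_model // n_heads
        self.qkv = nn.Linear(d_model, 3 * d_model)
        self.out = nn.Linear(d_model, d_model)
        # One small FFN per head; together they replace the monolithic FFN
        # while keeping a reasonably wide hidden dimension per head.
        self.head_ffns = nn.ModuleList(
            nn.Sequential(
                nn.Linear(self.d_head, head_ffn_dim),
                nn.ReLU(),
                nn.Linear(head_ffn_dim, self.d_head),
            )
            for _ in range(n_heads)
        )
        self.norm1 = nn.LayerNorm(d_model)
        self.norm2 = nn.LayerNorm(d_model)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, d_model)
        b, t, _ = x.shape
        q, k, v = self.qkv(self.norm1(x)).chunk(3, dim=-1)
        # Reshape to (batch, heads, seq_len, d_head).
        q, k, v = (
            z.view(b, t, self.n_heads, self.d_head).transpose(1, 2) for z in (q, k, v)
        )
        attn = torch.softmax(q @ k.transpose(-2, -1) / self.d_head**0.5, dim=-1)
        heads = attn @ v  # (batch, heads, seq_len, d_head)
        # Each head's output goes through that head's small FFN.
        heads = torch.stack(
            [ffn(heads[:, i]) for i, ffn in enumerate(self.head_ffns)], dim=1
        )
        merged = heads.transpose(1, 2).reshape(b, t, -1)
        return self.norm2(x + self.out(merged))


if __name__ == "__main__":
    layer = PartialFormerStyleLayer()
    y = layer(torch.randn(2, 10, 512))
    print(y.shape)  # torch.Size([2, 10, 512])
```

With 8 heads and a per-head hidden size of 256 in this sketch, the FFN parameters scale with the head dimension (64) rather than the full model dimension (512), which is the kind of saving the abstract describes; the head scaling strategy and residual-like attention from the paper are not modeled here.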