Convolutions and Self-Attention: Re-interpreting Relative Positions in Pre-trained Language Models

Tyler Chang, Yifan Xu, Weijian Xu, Zhuowen Tu


Abstract
In this paper, we detail the relationship between convolutions and self-attention in natural language tasks. We show that relative position embeddings in self-attention layers are equivalent to recently-proposed dynamic lightweight convolutions, and we consider multiple new ways of integrating convolutions into Transformer self-attention. Specifically, we propose composite attention, which unites previous relative position encoding methods under a convolutional framework. We conduct experiments by training BERT with composite attention, finding that convolutions consistently improve performance on multiple downstream tasks, replacing absolute position embeddings. To inform future work, we present results comparing lightweight convolutions, dynamic convolutions, and depthwise-separable convolutions in language model pre-training, considering multiple injection points for convolutions in self-attention layers.
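As a rough illustration of the equivalence the abstract describes (this is a minimal sketch, not the authors' released implementation; the function name, tensor shapes, and clipping scheme here are assumptions), the PyTorch snippet below adds a per-head relative-position bias to standard attention scores. Indexing that bias by clipped relative offset is what makes it act like a lightweight convolution kernel applied over positions.

import torch
import torch.nn.functional as F

def attention_with_relative_bias(q, k, v, rel_bias, max_dist):
    """q, k, v: (batch, heads, seq, head_dim); rel_bias: (heads, 2*max_dist+1)."""
    b, h, n, d = q.shape
    # Standard content-content attention scores.
    scores = q @ k.transpose(-2, -1) / d ** 0.5
    # Clipped relative offsets j - i, shifted into [0, 2*max_dist] for indexing.
    idx = torch.arange(n)
    rel = (idx[None, :] - idx[:, None]).clamp(-max_dist, max_dist) + max_dist
    # Looking up a per-head scalar by relative offset is a lightweight
    # convolution kernel over positions, added to the attention logits.
    scores = scores + rel_bias[:, rel]          # broadcasts over the batch dim
    attn = F.softmax(scores, dim=-1)
    return attn @ v

The composite attention proposed in the paper combines this kind of convolutional (relative-position) term with the usual query-key scores; the released code linked under "Code" below covers the full set of convolution variants and injection points compared in the experiments.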
Anthology ID:
2021.acl-long.333
Volume:
Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)
Month:
August
Year:
2021
Address:
Online
Editors:
Chengqing Zong, Fei Xia, Wenjie Li, Roberto Navigli
Venues:
ACL | IJCNLP
Publisher:
Association for Computational Linguistics
Pages:
4322–4333
URL:
https://aclanthology.org/2021.acl-long.333
DOI:
10.18653/v1/2021.acl-long.333
Cite (ACL):
Tyler Chang, Yifan Xu, Weijian Xu, and Zhuowen Tu. 2021. Convolutions and Self-Attention: Re-interpreting Relative Positions in Pre-trained Language Models. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 4322–4333, Online. Association for Computational Linguistics.
Cite (Informal):
Convolutions and Self-Attention: Re-interpreting Relative Positions in Pre-trained Language Models (Chang et al., ACL-IJCNLP 2021)
PDF:
https://preview.aclanthology.org/add_acl24_videos/2021.acl-long.333.pdf
Video:
https://preview.aclanthology.org/add_acl24_videos/2021.acl-long.333.mp4
Code:
mlpc-ucsd/BERT_Convolutions
Data:
CoLA | GLUE | QNLI