How Much Does Attention Actually Attend? Questioning the Importance of Attention in Pretrained Transformers

Michael Hassid, Hao Peng, Daniel Rotem, Jungo Kasai, Ivan Montero, Noah A. Smith, Roy Schwartz


Abstract
The attention mechanism is considered the backbone of the widely-used Transformer architecture. It contextualizes the input by computing input-specific attention matrices. We find that this mechanism, while powerful and elegant, is not as important as typically thought for pretrained language models. We introduce PAPA, a new probing method that replaces the input-dependent attention matrices with constant ones—the average attention weights over multiple inputs. We use PAPA to analyze several established pretrained Transformers on six downstream tasks. We find that without any input-dependent attention, all models achieve competitive performance—an average relative drop of only 8% from the probing baseline. Further, little or no performance drop is observed when replacing half of the input-dependent attention matrices with constant (input-independent) ones. Interestingly, we show that better-performing models lose more from applying our method than weaker models, suggesting that the utilization of the input-dependent attention mechanism might be a factor in their success. Our results motivate research on simpler alternatives to input-dependent attention, as well as on methods for better utilization of this mechanism in the Transformer architecture.
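To make the idea described in the abstract concrete, below is a minimal, hypothetical sketch (not the authors' code or the released PAPA implementation) of replacing a head's input-dependent attention with a constant matrix obtained by averaging attention weights over many inputs. The class name, the fixed maximum sequence length, and the row renormalization after cropping are assumptions made purely for illustration.

```python
# Illustrative sketch of the core idea: swap input-dependent attention for a
# constant (input-independent) attention matrix averaged over a probing corpus.
# This is NOT the paper's implementation; names and details are assumptions.

import torch
import torch.nn as nn


class ConstantAttentionHead(nn.Module):
    """A single attention head whose attention matrix is a fixed buffer."""

    def __init__(self, hidden_dim: int, head_dim: int, max_len: int):
        super().__init__()
        self.value = nn.Linear(hidden_dim, head_dim)
        # Constant attention weights; initialized uniform, later set to the
        # average of softmax attention matrices collected over many inputs.
        self.register_buffer(
            "const_attn", torch.full((max_len, max_len), 1.0 / max_len)
        )

    @torch.no_grad()
    def set_average_attention(self, attn_matrices: torch.Tensor) -> None:
        # attn_matrices: (num_examples, max_len, max_len), softmax-normalized.
        self.const_attn.copy_(attn_matrices.mean(dim=0))

    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
        # hidden_states: (batch, seq_len, hidden_dim)
        seq_len = hidden_states.size(1)
        v = self.value(hidden_states)                 # (batch, seq_len, head_dim)
        attn = self.const_attn[:seq_len, :seq_len]    # crop to current length
        attn = attn / attn.sum(dim=-1, keepdim=True)  # renormalize rows
        return attn @ v                               # same output shape as standard attention
```

In this sketch, no query/key projections are computed at inference time, which mirrors the paper's probing setup of removing input dependence from the attention matrices while keeping the rest of the layer intact.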
Anthology ID:
2022.findings-emnlp.101
Volume:
Findings of the Association for Computational Linguistics: EMNLP 2022
Month:
December
Year:
2022
Address:
Abu Dhabi, United Arab Emirates
Editors:
Yoav Goldberg, Zornitsa Kozareva, Yue Zhang
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
1403–1416
URL:
https://aclanthology.org/2022.findings-emnlp.101
DOI:
10.18653/v1/2022.findings-emnlp.101
Cite (ACL):
Michael Hassid, Hao Peng, Daniel Rotem, Jungo Kasai, Ivan Montero, Noah A. Smith, and Roy Schwartz. 2022. How Much Does Attention Actually Attend? Questioning the Importance of Attention in Pretrained Transformers. In Findings of the Association for Computational Linguistics: EMNLP 2022, pages 1403–1416, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics.
Cite (Informal):
How Much Does Attention Actually Attend? Questioning the Importance of Attention in Pretrained Transformers (Hassid et al., Findings 2022)
PDF:
https://preview.aclanthology.org/naacl24-info/2022.findings-emnlp.101.pdf
Video:
https://preview.aclanthology.org/naacl24-info/2022.findings-emnlp.101.mp4