Quantifying Attention Flow in Transformers

Samira Abnar, Willem Zuidema


Abstract
In the Transformer model, “self-attention” combines information from attended embeddings into the representation of the focal embedding in the next layer. Thus, across layers of the Transformer, information originating from different tokens gets increasingly mixed. This makes attention weights unreliable as explanation probes. In this paper, we consider the problem of quantifying this flow of information through self-attention. We propose two methods for approximating the attention to input tokens given attention weights, attention rollout and attention flow, as post hoc methods that treat attention weights as the relative relevance of the input tokens. We show that these methods give complementary views on the flow of information, and that, compared to raw attention, both yield higher correlations with importance scores of input tokens obtained using an ablation method and input gradients.
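To illustrate the idea, below is a minimal sketch of attention rollout in NumPy. It assumes per-layer attention matrices that have already been averaged over heads, mixes in the identity to account for residual connections, and multiplies the renormalized matrices across layers; the function name and the 0.5/0.5 residual mixing are illustrative assumptions, not the authors' exact implementation.

```python
import numpy as np

def attention_rollout(attentions):
    """Approximate token-to-token attention across layers.

    attentions: list of (num_tokens, num_tokens) row-stochastic attention
    matrices, one per layer, averaged over heads (assumption).
    Returns a (num_tokens, num_tokens) matrix where entry [i, j] approximates
    how much output token i attends to input token j.
    """
    num_tokens = attentions[0].shape[0]
    rollout = np.eye(num_tokens)
    for A in attentions:
        # Mix in the identity to model the residual connection,
        # then renormalize rows so they sum to 1 again.
        A_res = 0.5 * A + 0.5 * np.eye(num_tokens)
        A_res = A_res / A_res.sum(axis=-1, keepdims=True)
        # Compose this layer's mixing with everything below it.
        rollout = A_res @ rollout
    return rollout
```

Since each renormalized matrix is row-stochastic, the rolled-out matrix is too, so each row can still be read as a distribution over input tokens.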
Anthology ID:
2020.acl-main.385
Volume:
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics
Month:
July
Year:
2020
Address:
Online
Venue:
ACL
Publisher:
Association for Computational Linguistics
Pages:
4190–4197
URL:
https://aclanthology.org/2020.acl-main.385
DOI:
10.18653/v1/2020.acl-main.385
Cite (ACL):
Samira Abnar and Willem Zuidema. 2020. Quantifying Attention Flow in Transformers. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 4190–4197, Online. Association for Computational Linguistics.
Cite (Informal):
Quantifying Attention Flow in Transformers (Abnar & Zuidema, ACL 2020)
PDF:
https://preview.aclanthology.org/auto-file-uploads/2020.acl-main.385.pdf
Video:
http://slideslive.com/38928943
Code
samiraabnar/attention_flow + additional community code
Data
GLUE, SST