Human Guided Exploitation of Interpretable Attention Patterns in Summarization and Topic Segmentation

Raymond Li, Wen Xiao, Linzi Xing, Lanjun Wang, Gabriel Murray, Giuseppe Carenini


Abstract
The multi-head self-attention mechanism of the transformer model has been thoroughly investigated recently. In one vein of study, researchers are interested in understanding why and how transformers work. In another vein, researchers propose new attention augmentation methods to make transformers more accurate, efficient, and interpretable. In this paper, we combine these two lines of research in a human-in-the-loop pipeline to first discover important task-specific attention patterns. These patterns are then injected not only into smaller models but also into the original model. The benefits of our pipeline and the discovered patterns are demonstrated in two case studies on extractive summarization and topic segmentation. Experiments indicate that, after interpretable patterns are discovered in BERT-based models fine-tuned for the two downstream tasks, injecting those patterns into attention heads yields considerable improvements in accuracy and efficiency.
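
The pattern-injection step described in the abstract lends itself to a short illustration. Below is a minimal PyTorch sketch, not the authors' implementation: it replaces the post-softmax attention distribution of a single head with a fixed pattern. The "attend to the previous token" pattern and all function names here are hypothetical placeholders; the paper instead discovers task-specific patterns for extractive summarization and topic segmentation.

    # Minimal sketch (hypothetical, not the authors' code) of injecting a
    # fixed attention pattern into one head of multi-head self-attention.
    import torch


    def previous_token_pattern(seq_len: int) -> torch.Tensor:
        """Attention matrix where each position attends to the previous token.

        This pattern is an illustrative stand-in for a discovered pattern.
        """
        pattern = torch.zeros(seq_len, seq_len)
        pattern[0, 0] = 1.0  # the first token has no predecessor; attend to itself
        for i in range(1, seq_len):
            pattern[i, i - 1] = 1.0
        return pattern


    def inject_pattern(attn_probs: torch.Tensor, head_idx: int,
                       pattern: torch.Tensor) -> torch.Tensor:
        """Replace one head's learned attention distribution with a fixed pattern.

        attn_probs: (batch, num_heads, seq_len, seq_len) post-softmax attention.
        """
        attn_probs = attn_probs.clone()
        attn_probs[:, head_idx] = pattern  # broadcast over the batch dimension
        return attn_probs


    if __name__ == "__main__":
        batch, heads, seq_len = 2, 12, 8
        probs = torch.softmax(torch.randn(batch, heads, seq_len, seq_len), dim=-1)
        injected = inject_pattern(probs, head_idx=3,
                                  pattern=previous_token_pattern(seq_len))
        print(injected[0, 3])  # head 3 now follows the fixed pattern

In practice, such a substitution would presumably live inside the model's attention module (e.g., applied in the forward pass of the chosen layer) so that the fixed pattern takes effect during fine-tuning and inference, and skipping the query-key computation for injected heads is what would yield the efficiency gains the abstract mentions.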
Anthology ID: 2022.emnlp-main.694
Volume: Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
Month: December
Year: 2022
Address: Abu Dhabi, United Arab Emirates
Editors: Yoav Goldberg, Zornitsa Kozareva, Yue Zhang
Venue: EMNLP
Publisher: Association for Computational Linguistics
Pages: 10189–10204
URL: https://aclanthology.org/2022.emnlp-main.694
DOI: 10.18653/v1/2022.emnlp-main.694
Cite (ACL): Raymond Li, Wen Xiao, Linzi Xing, Lanjun Wang, Gabriel Murray, and Giuseppe Carenini. 2022. Human Guided Exploitation of Interpretable Attention Patterns in Summarization and Topic Segmentation. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 10189–10204, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics.
Cite (Informal): Human Guided Exploitation of Interpretable Attention Patterns in Summarization and Topic Segmentation (Li et al., EMNLP 2022)
PDF: https://preview.aclanthology.org/ingest-acl-2023-videos/2022.emnlp-main.694.pdf