Mitigating Attention Localization in Small Scale: Self-Attention Refinement via One-step Belief Propagation
Nakyung Lee | Yeongoon Kim | Minhae Oh | Suhwan Kim | Jin Woo Koo | Hyewon Jo | Jungwoo Lee
Findings of the Association for Computational Linguistics: EMNLP 2025
The Transformer-based self-attention mechanism serves as the core of modern language models, yet it often suffers from *localization*, where attention collapses onto a limited subset of tokens and fails to capture long-range dependencies. To address this issue, we propose **Self-Attention One-step Belief Propagation (SAOBP)**, a refinement framework that injects *multi-hop* relationships into attention through a belief propagation process. To interpret and quantify these interactions, we introduce **Global Token Dependency (GTD)**, a metric that captures the relative contribution of multi-hop connections within the attention graph. Empirical results indicate that SAOBP helps prevent entropy collapse in deeper layers and adaptively maintains GTD at task-appropriate levels, thereby supporting improvements in model performance. Importantly, we observe competitive gains in small-scale models, highlighting the method's potential for improving inference quality in resource-constrained scenarios.
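The abstract describes SAOBP as injecting multi-hop token relationships through a single propagation step over the attention graph, and GTD as a measure of how much those multi-hop connections contribute. The sketch below is a minimal, hypothetical illustration of that general idea in PyTorch, not the paper's formulation: the function names, the mixing weight `alpha`, and the convex-combination update are assumptions made for clarity. One propagation step mixes the original attention map with its two-hop product, and a mean row-entropy helper shows the kind of statistic one could use to diagnose attention localization.

```python
import torch


def one_step_bp_refine(attn: torch.Tensor, alpha: float = 0.5) -> torch.Tensor:
    """Illustrative one-step multi-hop refinement of an attention map.

    attn:  row-stochastic attention matrix of shape (..., n, n)
    alpha: assumed mixing weight between the original (one-hop) map and
           the propagated (two-hop) map.

    Note: this is a sketch of the general idea, not the SAOBP update
    defined in the paper.
    """
    # One propagation step: two-hop token dependencies via a matrix product.
    multi_hop = attn @ attn
    # Convex combination of two row-stochastic matrices stays row-stochastic.
    return (1.0 - alpha) * attn + alpha * multi_hop


def mean_row_entropy(attn: torch.Tensor) -> torch.Tensor:
    """Mean entropy of the attention rows (higher = less localized)."""
    return -(attn * attn.clamp_min(1e-12).log()).sum(dim=-1).mean()


if __name__ == "__main__":
    # Random row-stochastic attention for a single head over 6 tokens.
    attn = torch.softmax(torch.randn(6, 6), dim=-1)
    refined = one_step_bp_refine(attn, alpha=0.5)

    print(refined.sum(dim=-1))        # rows still sum to ~1
    print(mean_row_entropy(attn))     # entropy before refinement
    print(mean_row_entropy(refined))  # typically higher: mass is spread out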