Yihang Gao
2025
DAPE V2: Process Attention Score as Feature Map for Length Extrapolation
Chuanyang Zheng | Yihang Gao | Han Shi | Jing Xiong | Jiankai Sun | Jingyao Li | Minbin Huang | Xiaozhe Ren | Michael Ng | Xin Jiang | Zhenguo Li | Yu Li
Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
The attention mechanism is a fundamental component of the Transformer model, contributing to interactions among distinct tokens. In general, the attention scores are determined simply by the key-query products. However, an exploratory experiment in this work (combining DAPE and NoPE), which applies additional MLPs to the attention scores without position encoding, indicates that the classical key-query multiplication may limit the performance of Transformers. In this work, we conceptualize attention as a feature map and apply the convolution operator (over neighboring attention scores across different heads) to mimic processing methods in computer vision. Specifically, **the main contribution of this paper is identifying and interpreting the Transformer length extrapolation problem as a result of the limited expressiveness of the naive query-key dot product, and we successfully translate the length extrapolation issue into a well-understood feature map processing problem**, which we call Convolutional Data-Adaptive Position Encoding (CDAPE). This novel insight, which can be adapted to various attention-related models, reveals that the current Transformer architecture has the potential for further evolution. Extensive experiments demonstrate that treating attention as a feature map and applying convolution as a processing method significantly enhances Transformer performance.
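As a rough illustration of the idea described in the abstract, here is a minimal sketch assuming a PyTorch-style attention layer; the tensor layout, kernel size, and the residual way the convolved scores are added back are illustrative assumptions, not the paper's exact CDAPE configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class ConvProcessedAttention(nn.Module):
    """Sketch: treat the pre-softmax attention scores (heads x queries x keys)
    as a feature map and refine them with a small convolution that mixes
    neighboring scores and different heads, before the usual softmax."""

    def __init__(self, num_heads: int, head_dim: int, kernel_size: int = 3):
        super().__init__()
        self.scale = head_dim ** -0.5
        # 2D convolution over the (query, key) plane; the channel dimension
        # corresponds to attention heads, so heads are mixed as well.
        self.conv = nn.Conv2d(
            num_heads, num_heads,
            kernel_size=kernel_size,
            padding=kernel_size // 2,
        )

    def forward(self, q, k, v):
        # q, k, v: (batch, heads, seq_len, head_dim)
        scores = torch.matmul(q, k.transpose(-2, -1)) * self.scale  # (B, H, L, L)
        # Process the score "feature map" and add it back as a data-adaptive
        # bias (a causal mask would be applied here for language modeling).
        scores = scores + self.conv(scores)
        probs = F.softmax(scores, dim=-1)
        return torch.matmul(probs, v)
```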
Self-Adjust Softmax
Chuanyang Zheng | Yihang Gao | Guoxuan Chen | Han Shi | Jing Xiong | Xiaozhe Ren | Chao Huang | Zhenguo Li | Yu Li
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing
The softmax function is crucial in Transformer attention; it normalizes each row of the attention scores to sum to one. **Usually, tokens with larger attention scores are important for the final prediction. However, the softmax function can face a gradient vanishing issue for such important tokens (e.g., probabilities close to one), leading to optimization difficulties for these important tokens and potentially limiting performance.** In this paper, we propose Self-Adjust Softmax (SA-Softmax) to address this issue by modifying softmax(z) to z ⋅ softmax(z), along with its normalized variant (z − min(z_min, 0)) / (max(z_max, 0) − min(z_min, 0)) ⋅ softmax(z). We theoretically show that SA-Softmax provides enhanced gradient properties compared to the vanilla softmax function. Moreover, SA-Softmax can be seamlessly integrated into the attention mechanisms of existing Transformer models with minor adjustments. We conducted experiments to evaluate the empirical performance of Transformer models using SA-Softmax compared to the vanilla softmax function. These experiments, involving models with up to 2.7 billion parameters, are conducted across diverse datasets, language tasks, and positional encoding methods.
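A minimal sketch of the two SA-Softmax variants named in the abstract; taking the min/max along the same dimension as the softmax and adding a small epsilon for numerical stability are assumptions of this illustration, not details taken from the paper.

```python
import torch
import torch.nn.functional as F


def sa_softmax(z: torch.Tensor, dim: int = -1) -> torch.Tensor:
    """Basic variant: z * softmax(z)."""
    return z * F.softmax(z, dim=dim)


def sa_softmax_normalized(z: torch.Tensor, dim: int = -1, eps: float = 1e-6) -> torch.Tensor:
    """Normalized variant:
    (z - min(z_min, 0)) / (max(z_max, 0) - min(z_min, 0)) * softmax(z),
    with z_min / z_max taken along the softmax dimension."""
    z_min = z.amin(dim=dim, keepdim=True).clamp(max=0.0)  # min(z_min, 0)
    z_max = z.amax(dim=dim, keepdim=True).clamp(min=0.0)  # max(z_max, 0)
    scale = (z - z_min) / (z_max - z_min + eps)
    return scale * F.softmax(z, dim=dim)


# Example: apply over the key dimension of attention scores (B, H, Lq, Lk).
scores = torch.randn(2, 4, 16, 16)
probs = sa_softmax_normalized(scores)
```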