Jie Lyu
2020
Repulsive Attention: Rethinking Multi-head Attention as Bayesian Inference
Bang An | Jie Lyu | Zhenyi Wang | Chunyuan Li | Changwei Hu | Fei Tan | Ruiyi Zhang | Yifan Hu | Changyou Chen
Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)
The neural attention mechanism plays an important role in many natural language processing applications. In particular, multi-head attention extends single-head attention by allowing a model to jointly attend to information from different perspectives. However, without explicit constraints, multi-head attention may suffer from attention collapse, an issue that makes different heads extract similar attentive features, thus limiting the model's representation power. In this paper, for the first time, we provide a novel understanding of multi-head attention from a Bayesian perspective. Based on recently developed particle-optimization sampling techniques, we propose a non-parametric approach that explicitly improves repulsiveness in multi-head attention and consequently strengthens the model's expressiveness. Remarkably, our Bayesian interpretation provides theoretical inspiration for two not-well-understood questions: why and how to use multi-head attention. Extensive experiments on various attention models and applications demonstrate that the proposed repulsive attention can improve the learned feature diversity, leading to more informative representations and consistent performance improvements on multiple tasks.
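The repulsiveness described in the abstract comes from particle-optimization samplers such as Stein variational gradient descent (SVGD). Below is a minimal PyTorch sketch of one SVGD step with an RBF kernel, treating each attention head's flattened parameters as a particle; the function names, the median bandwidth heuristic, and the plain-gradient update are illustrative assumptions for exposition, not the paper's exact algorithm.

```python
import torch

def rbf_kernel(particles, bandwidth=None):
    """RBF kernel matrix over head 'particles' plus the SVGD repulsion term.
    particles: (n, d) tensor, one flattened parameter vector per head."""
    n = particles.shape[0]
    sq_dists = torch.cdist(particles, particles) ** 2
    if bandwidth is None:
        # median heuristic for the kernel bandwidth (a common default)
        bandwidth = (sq_dists.median() / (2.0 * torch.log(torch.tensor(n + 1.0)))).clamp(min=1e-8)
    K = torch.exp(-sq_dists / (2.0 * bandwidth))
    # repulsion[i] = sum_j K[i, j] * (x_i - x_j) / h : pushes particles apart
    diffs = particles.unsqueeze(1) - particles.unsqueeze(0)  # (n, n, d)
    repulsion = (K.unsqueeze(-1) * diffs).sum(dim=1) / bandwidth
    return K, repulsion

def svgd_step(particles, log_prob_grad, step_size=1e-3):
    """One SVGD update. log_prob_grad: (n, d) gradients of the log posterior
    at each particle. The kernel-weighted gradient pulls particles toward
    high-density regions; the repulsion term keeps heads from collapsing
    onto the same attentive features."""
    K, repulsion = rbf_kernel(particles)
    phi = (K @ log_prob_grad + repulsion) / particles.shape[0]
    return particles + step_size * phi
```

With a single head (n = 1) the repulsion term vanishes and the update reduces to ordinary gradient ascent, which matches the Bayesian reading of multi-head attention as approximate posterior sampling with heads as particles.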
Calibrated Language Model Fine-Tuning for In- and Out-of-Distribution Data
Lingkai Kong | Haoming Jiang | Yuchen Zhuang | Jie Lyu | Tuo Zhao | Chao Zhang
Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)
Fine-tuned pre-trained language models can suffer from severe miscalibration for both in-distribution and out-of-distribution (OOD) data due to over-parameterization. To mitigate this issue, we propose a regularized fine-tuning method. Our method introduces two types of regularization for better calibration: (1) On-manifold regularization, which generates pseudo on-manifold samples through interpolation within the data manifold. Augmented training with these pseudo samples imposes a smoothness regularization that improves in-distribution calibration. (2) Off-manifold regularization, which encourages the model to output uniform distributions for pseudo off-manifold samples, addressing the over-confidence issue on OOD data. Our experiments demonstrate that the proposed method outperforms existing calibration methods for text classification in terms of expected calibration error, misclassification detection, and OOD detection on six datasets. Our code can be found at https://github.com/Lingkai-Kong/Calibrated-BERT-Fine-Tuning.
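For concreteness, here is a hedged sketch of the two regularizers described above, assuming a HuggingFace-style classifier that accepts inputs_embeds and returns .logits; the helper names, the Beta-distributed mixup interpolation, and the random-noise stand-in for off-manifold sample generation are illustrative assumptions, not the released implementation (see the linked repository for that).

```python
import torch
import torch.nn.functional as F

def on_manifold_loss(model, emb_a, emb_b, y_a, y_b, alpha=0.4):
    """Interpolate two training embeddings (a pseudo on-manifold sample)
    and train on the correspondingly mixed labels; this acts as a
    smoothness regularizer for in-distribution calibration."""
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    mixed = lam * emb_a + (1.0 - lam) * emb_b
    logits = model(inputs_embeds=mixed).logits
    return lam * F.cross_entropy(logits, y_a) + (1.0 - lam) * F.cross_entropy(logits, y_b)

def off_manifold_loss(model, emb, eps=1e-2):
    """Perturb embeddings away from the data manifold and push the
    predictive distribution toward uniform, countering over-confidence
    on OOD inputs. (Random noise stands in for the paper's off-manifold
    sample generation.)"""
    perturbed = emb + eps * torch.randn_like(emb)
    log_probs = F.log_softmax(model(inputs_embeds=perturbed).logits, dim=-1)
    # equals KL(uniform || p) up to an additive constant
    return -log_probs.mean()
```

In this reading, the total fine-tuning objective would be the standard task loss plus weighted sums of these two terms, with the weights as tuning hyperparameters.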