Abstract
The Transformer architecture is crucial for numerous AI models, but it still faces challenges in long-range language modeling. Though several specific transformer architectures have been designed to tackle issues of long-range dependencies, existing methods like Transformer-XL are plagued by a high percentage of ineffective memories. In this study, we present a plug-and-play strategy, known as TRAining-free Memory Selection (TRAMS), that selects tokens participating in attention calculation based on one simple metric. This strategy allows us to keep tokens that are likely to have a high attention score with the current queries and to ignore the rest. We have tested our approach on a word-level benchmark (WikiText-103) and a character-level benchmark (enwik8), and the results indicate an improvement without additional training or added parameters.
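The abstract does not spell out the selection metric, but the overall idea is to score cached memory tokens with a query-independent criterion and let only the top-scoring ones participate in attention. Below is a minimal, hypothetical sketch of how such a training-free selection step could slot into a Transformer-XL-style attention call. The helper names (`select_memory`, `attend`) and the key-norm score are illustrative assumptions, not the paper's exact formulation.

```python
import torch

def select_memory(mem_keys, mem_values, budget, metric_fn=None):
    """Training-free memory selection (illustrative sketch).

    mem_keys, mem_values: cached key/value tensors of shape (mem_len, d).
    budget: number of memory tokens to keep.
    metric_fn: optional query-independent scoring function. The key-norm
        score used as a default here is a placeholder assumption, not
        necessarily the metric proposed in the paper.
    """
    if metric_fn is None:
        # Placeholder heuristic: keys with larger norm tend to produce
        # larger-magnitude attention logits for arbitrary queries.
        scores = mem_keys.norm(dim=-1)
    else:
        scores = metric_fn(mem_keys)
    keep = scores.topk(min(budget, mem_keys.size(0))).indices.sort().values
    return mem_keys[keep], mem_values[keep]

def attend(query, mem_keys, mem_values, cur_keys, cur_values, budget):
    """Concatenate selected memories with current-segment keys/values,
    then run standard scaled dot-product attention."""
    sel_k, sel_v = select_memory(mem_keys, mem_values, budget)
    k = torch.cat([sel_k, cur_keys], dim=0)
    v = torch.cat([sel_v, cur_values], dim=0)
    logits = query @ k.t() / k.size(-1) ** 0.5
    return torch.softmax(logits, dim=-1) @ v
```

Because the scoring function needs no gradients or extra parameters, it can be dropped into an existing cached-memory attention layer at inference time, which is what makes the approach "plug-and-play."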
- Anthology ID: 2023.findings-emnlp.331
- Volume: Findings of the Association for Computational Linguistics: EMNLP 2023
- Month: December
- Year: 2023
- Address: Singapore
- Editors: Houda Bouamor, Juan Pino, Kalika Bali
- Venue: Findings
- Publisher: Association for Computational Linguistics
- Pages: 4966–4972
- URL: https://aclanthology.org/2023.findings-emnlp.331
- DOI: 10.18653/v1/2023.findings-emnlp.331
- Cite (ACL): Haofei Yu, Cunxiang Wang, Yue Zhang, and Wei Bi. 2023. TRAMS: Training-free Memory Selection for Long-range Language Modeling. In Findings of the Association for Computational Linguistics: EMNLP 2023, pages 4966–4972, Singapore. Association for Computational Linguistics.
- Cite (Informal): TRAMS: Training-free Memory Selection for Long-range Language Modeling (Yu et al., Findings 2023)
- PDF: https://preview.aclanthology.org/nschneid-patch-5/2023.findings-emnlp.331.pdf