Linearizing Transformer with Key-Value Memory

Yizhe Zhang, Deng Cai


Abstract
Efficient transformer variants with linear time complexity have been developed to mitigate the quadratic computational overhead of the vanilla transformer. Among them are low-rank projection methods such as Linformer and kernel-based transformers. Despite their unique merits, these variants usually suffer a performance drop compared with the vanilla transformer on many sequence generation tasks, and often fail to realize computational gains when the generated sequence is short. We propose Memsizer, an approach toward closing the performance gap while improving efficiency even for short generation. Memsizer projects the source sequence into lower-dimensional representations like Linformer, while enjoying efficient recurrent-style incremental computation similar to kernel-based transformers. This yields linear computation time and constant memory complexity at inference time. Memsizer also employs a lightweight multi-head mechanism that renders the computation as light as that of a single-head model. We demonstrate that Memsizer strikes an improved balance between efficiency and accuracy compared with the vanilla transformer and other efficient transformer variants on three typical sequence generation tasks: machine translation, abstractive text summarization, and language modeling.
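For intuition on the recurrent-style incremental computation the abstract refers to, below is a minimal, illustrative Python sketch of generic kernel-based linear attention with a fixed-size key-value memory. This is not the paper's actual Memsizer formulation (see the PDF for that); the feature map and all names here are assumptions chosen for illustration.

# Hypothetical sketch (not the paper's code): recurrent-style linear attention.
# At each decoding step, a fixed-size key-value memory S (d_k x d_v) and a
# normalizer z are updated in O(d_k * d_v) time, so per-step cost and memory
# are constant in sequence length -- the property the abstract attributes to
# kernel-style linearization.
import numpy as np

def positive_feature_map(x):
    # Illustrative positive feature map (an assumption); this is elu(x) + 1.
    return np.where(x > 0, x + 1.0, np.exp(x))

def linear_attention_incremental(queries, keys, values):
    """Causal linear attention computed recurrently.

    queries, keys: arrays of shape (T, d_k); values: shape (T, d_v).
    Returns outputs of shape (T, d_v).
    """
    T, d_k = queries.shape
    d_v = values.shape[1]
    S = np.zeros((d_k, d_v))   # running key-value memory (fixed size)
    z = np.zeros(d_k)          # running normalizer
    outputs = np.zeros((T, d_v))
    for t in range(T):
        k = positive_feature_map(keys[t])
        q = positive_feature_map(queries[t])
        S += np.outer(k, values[t])        # constant-size state update
        z += k
        outputs[t] = (q @ S) / (q @ z + 1e-6)
    return outputs

# Usage: the state never grows with T, unlike a softmax-attention KV cache.
rng = np.random.default_rng(0)
T, d_k, d_v = 8, 16, 16
out = linear_attention_incremental(rng.normal(size=(T, d_k)),
                                   rng.normal(size=(T, d_k)),
                                   rng.normal(size=(T, d_v)))
print(out.shape)  # (8, 16)

The key property shared with the computation described in the abstract is that the per-step state has fixed size, so decoding costs a constant amount of work per token instead of growing with the length of the prefix.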
Anthology ID: 2022.emnlp-main.24
Volume: Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
Month: December
Year: 2022
Address: Abu Dhabi, United Arab Emirates
Editors: Yoav Goldberg, Zornitsa Kozareva, Yue Zhang
Venue: EMNLP
Publisher: Association for Computational Linguistics
Pages: 346–359
URL: https://aclanthology.org/2022.emnlp-main.24
DOI: 10.18653/v1/2022.emnlp-main.24
Cite (ACL): Yizhe Zhang and Deng Cai. 2022. Linearizing Transformer with Key-Value Memory. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 346–359, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics.
Cite (Informal): Linearizing Transformer with Key-Value Memory (Zhang & Cai, EMNLP 2022)
PDF: https://preview.aclanthology.org/ingest-acl-2023-videos/2022.emnlp-main.24.pdf