Reservoir Transformers

Sheng Shen, Alexei Baevski, Ari Morcos, Kurt Keutzer, Michael Auli, Douwe Kiela


Abstract
We demonstrate that transformers obtain impressive performance even when some of the layers are randomly initialized and never updated. Inspired by old and well-established ideas in machine learning, we explore a variety of non-linear “reservoir” layers interspersed with regular transformer layers, and show improvements in wall-clock compute time until convergence, as well as overall performance, on various machine translation and (masked) language modelling tasks.
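The core idea, frozen "reservoir" layers interleaved with trainable ones, can be sketched in a few lines. This is a minimal illustrative sketch, not the authors' code: each layer is reduced to a single nonlinear projection standing in for a full transformer layer, and the update rule is a dummy SGD step.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 16  # model dimension (illustrative)

def make_layer(trainable):
    # A single nonlinear projection stands in for a full transformer layer.
    return {"W": rng.normal(scale=d ** -0.5, size=(d, d)), "trainable": trainable}

# Interleave regular (trainable) layers with frozen random "reservoir" layers.
layers = [make_layer(True), make_layer(False), make_layer(True), make_layer(False)]

def forward(x, layers):
    for layer in layers:
        x = np.tanh(x @ layer["W"])  # non-linear reservoir-style transformation
    return x

def sgd_step(layers, grads, lr=0.1):
    # Only trainable layers are updated; reservoir weights stay fixed forever.
    for layer, g in zip(layers, grads):
        if layer["trainable"]:
            layer["W"] -= lr * g

x = rng.normal(size=(4, d))
y = forward(x, layers)

frozen_before = layers[1]["W"].copy()
sgd_step(layers, [np.ones((d, d)) for _ in layers])  # dummy gradients
```

Because the reservoir layers are never updated, their gradients need not be computed or stored, which is the source of the wall-clock and memory savings the abstract refers to.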
Anthology ID:
2021.acl-long.331
Volume:
Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers)
Month:
August
Year:
2021
Address:
Online
Editors:
Chengqing Zong, Fei Xia, Wenjie Li, Roberto Navigli
Venues:
ACL | IJCNLP
Publisher:
Association for Computational Linguistics
Pages:
4294–4309
URL:
https://aclanthology.org/2021.acl-long.331
DOI:
10.18653/v1/2021.acl-long.331
Cite (ACL):
Sheng Shen, Alexei Baevski, Ari Morcos, Kurt Keutzer, Michael Auli, and Douwe Kiela. 2021. Reservoir Transformers. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 4294–4309, Online. Association for Computational Linguistics.
Cite (Informal):
Reservoir Transformers (Shen et al., ACL-IJCNLP 2021)
PDF:
https://preview.aclanthology.org/ingest-acl-2023-videos/2021.acl-long.331.pdf
Video:
https://preview.aclanthology.org/ingest-acl-2023-videos/2021.acl-long.331.mp4
Data
MultiNLI | SST | SST-2