A Lightweight Recurrent Network for Sequence Modeling

Biao Zhang, Rico Sennrich


Abstract
Recurrent networks have achieved great success on various sequential tasks with the assistance of complex recurrent units, but suffer from severe computational inefficiency due to weak parallelization. One direction for alleviating this issue is to shift heavy computations outside the recurrence. In this paper, we propose a lightweight recurrent network, or LRN. LRN uses input and forget gates to handle long-range dependencies as well as vanishing and exploding gradients, with all parameter-related calculations factored outside the recurrence. The recurrence in LRN only manipulates the weight assigned to each token, tightly connecting LRN with self-attention networks. We apply LRN as a drop-in replacement for existing recurrent units in several neural sequential models. Extensive experiments on six NLP tasks show that LRN yields the best running efficiency with little or no loss in model performance.
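The core idea in the abstract, that all parameter-related (matrix-multiply) work is factored outside the recurrence, leaving only elementwise gating inside the loop, can be illustrated with a minimal NumPy sketch. The weight names (`Wi`, `Wf`, `Wv`) and the exact gate equations below are illustrative assumptions for the general design described here, not necessarily the paper's precise formulation.

```python
import numpy as np

def lrn_sketch(X, Wi, Wf, Wv):
    """X: (T, d_in) input sequence; returns hidden states of shape (T, d_out).

    Hypothetical sketch: gate/value projections are hypothetical names,
    chosen to mirror the abstract's input/forget-gate description.
    """
    # Heavy, parallelizable part: all matrix multiplications are done
    # for every timestep at once, outside the recurrence.
    I = 1.0 / (1.0 + np.exp(-(X @ Wi)))  # input gates,  (T, d_out)
    F = 1.0 / (1.0 + np.exp(-(X @ Wf)))  # forget gates, (T, d_out)
    V = X @ Wv                           # token values, (T, d_out)

    # Lightweight recurrence: elementwise only, no learned parameters
    # touched inside the loop; it just reweights history vs. the token.
    h = np.zeros(Wv.shape[1])
    hs = []
    for t in range(X.shape[0]):
        h = F[t] * h + I[t] * V[t]
        hs.append(h)
    return np.stack(hs)

rng = np.random.default_rng(0)
X = rng.standard_normal((5, 8))
Wi, Wf, Wv = (rng.standard_normal((8, 4)) for _ in range(3))
H = lrn_sketch(X, Wi, Wf, Wv)
print(H.shape)  # (5, 4)
```

Because the loop body contains no matrix products, the expensive projections can be batched or parallelized across the whole sequence, which is the source of the efficiency gain the abstract claims.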
Anthology ID:
P19-1149
Volume:
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics
Month:
July
Year:
2019
Address:
Florence, Italy
Editors:
Anna Korhonen, David Traum, Lluís Màrquez
Venue:
ACL
Publisher:
Association for Computational Linguistics
Pages:
1538–1548
URL:
https://aclanthology.org/P19-1149
DOI:
10.18653/v1/P19-1149
Cite (ACL):
Biao Zhang and Rico Sennrich. 2019. A Lightweight Recurrent Network for Sequence Modeling. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 1538–1548, Florence, Italy. Association for Computational Linguistics.
Cite (Informal):
A Lightweight Recurrent Network for Sequence Modeling (Zhang & Sennrich, ACL 2019)
PDF:
https://preview.aclanthology.org/emnlp-22-attachments/P19-1149.pdf
Code
bzhangGo/lrn
Data
CoNLL 2003, SNLI, SQuAD, WMT 2014