Advancing Regular Language Reasoning in Linear Recurrent Neural Networks

Ting-Han Fan, Ta-Chung Chi, Alexander Rudnicky


Abstract
In recent studies, linear recurrent neural networks (LRNNs) have achieved Transformer-level performance in natural language and long-range modeling, while offering rapid parallel training and constant inference cost. With the resurgence of interest in LRNNs, we study whether they can learn the hidden rules in training sequences, such as the grammatical structure of regular languages. We theoretically analyze some existing LRNNs and discover their limitations in modeling regular languages. Motivated by this analysis, we propose a new LRNN equipped with a block-diagonal and input-dependent transition matrix. Experiments suggest that the proposed model is the only LRNN capable of performing length extrapolation on regular language tasks such as Sum, Even Pair, and Modular Arithmetic. The code is released at https://github.com/tinghanf/RegluarLRNN.
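
The mechanism named in the abstract is a linear recurrence h_t = A(x_t) h_{t-1} + B x_t whose transition matrix A(x_t) is both block-diagonal and input-dependent. The following is a minimal NumPy sketch of that idea only, assuming a rotation-based parameterization of 2x2 blocks; the names (W_angle, transition, run) and the specific parameterization are illustrative assumptions, not the paper's actual model.

import numpy as np

rng = np.random.default_rng(0)

D_IN, N_BLOCKS = 8, 4                                # hidden size = 2 * N_BLOCKS
W_angle = 0.1 * rng.normal(size=(N_BLOCKS, D_IN))    # input -> one angle per block (hypothetical)
B = 0.1 * rng.normal(size=(2 * N_BLOCKS, D_IN))      # input projection

def transition(x_t):
    # Block-diagonal A(x_t): each 2x2 block is a rotation whose angle
    # depends on the current input, so the map stays linear in the state
    # while the dynamics change from token to token.
    theta = W_angle @ x_t
    A = np.zeros((2 * N_BLOCKS, 2 * N_BLOCKS))
    for i, t in enumerate(theta):
        c, s = np.cos(t), np.sin(t)
        A[2*i:2*i+2, 2*i:2*i+2] = [[c, -s], [s, c]]
    return A

def run(xs):
    # Scan the linear recurrence h_t = A(x_t) h_{t-1} + B x_t over a sequence.
    h = np.zeros(2 * N_BLOCKS)
    for x_t in xs:
        h = transition(x_t) @ h + B @ x_t
    return h

xs = rng.normal(size=(16, D_IN))  # toy length-16 sequence of input vectors
print(run(xs).shape)              # (8,)

Because each block is 2x2, a recurrence step costs time linear in the hidden size rather than quadratic, and input dependence lets the state update differ per token, loosely mirroring the per-symbol state transitions of the finite automaton that recognizes a regular language.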
Anthology ID:
2024.naacl-short.4
Volume:
Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 2: Short Papers)
Month:
June
Year:
2024
Address:
Mexico City, Mexico
Editors:
Kevin Duh, Helena Gomez, Steven Bethard
Venue:
NAACL
Publisher:
Association for Computational Linguistics
Pages:
45–53
URL:
https://aclanthology.org/2024.naacl-short.4
Cite (ACL):
Ting-Han Fan, Ta-Chung Chi, and Alexander Rudnicky. 2024. Advancing Regular Language Reasoning in Linear Recurrent Neural Networks. In Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 2: Short Papers), pages 45–53, Mexico City, Mexico. Association for Computational Linguistics.
Cite (Informal):
Advancing Regular Language Reasoning in Linear Recurrent Neural Networks (Fan et al., NAACL 2024)
PDF:
https://preview.aclanthology.org/ingestion-checklist/2024.naacl-short.4.pdf