Ting-Han Fan
2024
Advancing Regular Language Reasoning in Linear Recurrent Neural Networks
Ting-Han Fan | Ta-Chung Chi | Alexander Rudnicky
Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 2: Short Papers)
Attention Alignment and Flexible Positional Embeddings Improve Transformer Length Extrapolation
Ta-Chung Chi | Ting-Han Fan | Alexander Rudnicky
Findings of the Association for Computational Linguistics: NAACL 2024
2023
Transformer Working Memory Enables Regular Language Reasoning And Natural Language Length Extrapolation
Ta-Chung Chi | Ting-Han Fan | Alexander Rudnicky | Peter Ramadge
Findings of the Association for Computational Linguistics: EMNLP 2023
Dissecting Transformer Length Extrapolation via the Lens of Receptive Field Analysis
Ta-Chung Chi | Ting-Han Fan | Alexander Rudnicky | Peter Ramadge
Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
Latent Positional Information is in the Self-Attention Variance of Transformer Language Models Without Positional Embeddings
Ta-Chung Chi | Ting-Han Fan | Li-Wei Chen | Alexander Rudnicky | Peter Ramadge
Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)