Bringing Emerging Architectures to Sequence Labeling in NLP

Ana Ezquerro, Carlos Gómez-Rodríguez, David Vilares


Abstract
Pretrained Transformer encoders are the dominant approach to sequence labeling. While some alternative architectures, such as xLSTMs, structured state-space models, diffusion models, and adversarial learning, have shown promise in language modeling, few have been applied to sequence labeling, and mostly to flat or simplified tasks. We study how these architectures adapt across tagging tasks that vary in structural complexity, label space, and token dependencies, with evaluation spanning multiple languages. We find that the strong performance previously observed in simpler settings does not always generalize well across languages or datasets, nor does it extend to more complex structured tasks.
Anthology ID:
2026.eacl-long.227
Volume:
Proceedings of the 19th Conference of the European Chapter of the Association for Computational Linguistics (Volume 1: Long Papers)
Month:
March
Year:
2026
Address:
Rabat, Morocco
Editors:
Vera Demberg, Kentaro Inui, Lluís Màrquez
Venue:
EACL
Publisher:
Association for Computational Linguistics
Pages:
4886–4909
URL:
https://preview.aclanthology.org/ingest-eacl/2026.eacl-long.227/
Cite (ACL):
Ana Ezquerro, Carlos Gómez-Rodríguez, and David Vilares. 2026. Bringing Emerging Architectures to Sequence Labeling in NLP. In Proceedings of the 19th Conference of the European Chapter of the Association for Computational Linguistics (Volume 1: Long Papers), pages 4886–4909, Rabat, Morocco. Association for Computational Linguistics.
Cite (Informal):
Bringing Emerging Architectures to Sequence Labeling in NLP (Ezquerro et al., EACL 2026)
PDF:
https://preview.aclanthology.org/ingest-eacl/2026.eacl-long.227.pdf