@inproceedings{zhou-salhan-2025-extended,
    title = "Extended Abstract for ``Linguistic Universals'': Emergent Shared Features in Independent Monolingual Language Models via Sparse Autoencoders",
    author = "Zhou, Ej  and
      Salhan, Suchir",
    editor = "Adelani, David Ifeoluwa  and
      Arnett, Catherine  and
      Ataman, Duygu  and
      Chang, Tyler A.  and
      Gonen, Hila  and
      Raja, Rahul  and
      Schmidt, Fabian  and
      Stap, David  and
      Wang, Jiayi",
    booktitle = "Proceedings of the 5th Workshop on Multilingual Representation Learning (MRL 2025)",
    month = nov,
    year = "2025",
    address = "Suzhou, China",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2025.mrl-main.9/",
    pages = "128--130",
    ISBN = "979-8-89176-345-6",
    abstract = "Do independently trained monolingual language models converge on shared linguistic principles? To explore this question, we propose to analyze a suite of models trained separately on single languages but with identical architectures and budgets. We train sparse autoencoders (SAEs) on model activations to obtain interpretable latent features, then align them across languages using activation correlations. We perform pairwise analyses to test whether feature spaces show non-trivial convergence, and we identify universal features that consistently emerge across diverse models. Positive results will provide evidence that certain high-level regularities in language are rediscovered independently in machine learning systems."
}