What if I ask in alia lingua? Measuring Functional Similarity Across Languages

Debangan Mishra, Arihant Rastogi, Agyeya Singh Negi, Shashwat Goel, Ponnurangam Kumaraguru


Abstract
How similar are model outputs across languages? In this work, we study this question using a recently proposed model similarity metric, κ_p, applied to 20 languages and 47 subjects in GlobalMMLU. Our analysis reveals that a model's responses become increasingly consistent across languages as its size and capability grow. Interestingly, models agree more with themselves across languages than with other models prompted in the same language. These results highlight not only the value of κ_p as a practical tool for evaluating multilingual reliability, but also its potential to guide the development of more consistent multilingual systems.
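The κ_p metric referenced above is a chance- and accuracy-adjusted agreement score for probabilistic model outputs. As a rough illustration of the underlying idea only, the sketch below computes plain Cohen's kappa between one model's multiple-choice answers elicited in two languages; the function name and the answer data are hypothetical, and this is a simplified proxy, not the paper's κ_p implementation.

# A minimal sketch (not the paper's exact kappa_p) of measuring
# cross-lingual agreement between a model's multiple-choice answers.
# kappa_p additionally adjusts for accuracy and works on probabilistic
# outputs; here we show the simpler Cohen's-kappa flavour on hard answers.
from collections import Counter

def cohens_kappa(answers_a, answers_b):
    """Chance-adjusted agreement between two equal-length answer sequences."""
    assert len(answers_a) == len(answers_b)
    n = len(answers_a)
    # Fraction of items where the two answer sets coincide.
    observed = sum(a == b for a, b in zip(answers_a, answers_b)) / n
    # Agreement expected by chance, from each sequence's answer marginals.
    freq_a, freq_b = Counter(answers_a), Counter(answers_b)
    expected = sum(freq_a[k] * freq_b[k] for k in freq_a) / n ** 2
    return (observed - expected) / (1 - expected)

# Hypothetical usage: one model's GlobalMMLU answers in English vs. Hindi.
answers_en = ["A", "C", "B", "D", "A", "B"]
answers_hi = ["A", "C", "B", "A", "A", "B"]
print(f"cross-lingual kappa: {cohens_kappa(answers_en, answers_hi):.3f}")

The chance correction is what makes such a score meaningful across languages: raw answer overlap can be high simply because both runs favour the same common options, whereas a kappa-style score only rewards agreement beyond that baseline.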
Anthology ID:
2025.mrl-main.33
Volume:
Proceedings of the 5th Workshop on Multilingual Representation Learning (MRL 2025)
Month:
November
Year:
2025
Address:
Suzhou, China
Editors:
David Ifeoluwa Adelani, Catherine Arnett, Duygu Ataman, Tyler A. Chang, Hila Gonen, Rahul Raja, Fabian Schmidt, David Stap, Jiayi Wang
Venues:
MRL | WS
Publisher:
Association for Computational Linguistics
Note:
Pages:
496–506
URL:
https://preview.aclanthology.org/ingest-emnlp/2025.mrl-main.33/
Cite (ACL):
Debangan Mishra, Arihant Rastogi, Agyeya Singh Negi, Shashwat Goel, and Ponnurangam Kumaraguru. 2025. What if I ask in alia lingua? Measuring Functional Similarity Across Languages. In Proceedings of the 5th Workshop on Multilingual Representation Learning (MRL 2025), pages 496–506, Suzhou, China. Association for Computational Linguistics.
Cite (Informal):
What if I ask in alia lingua? Measuring Functional Similarity Across Languages (Mishra et al., MRL 2025)
PDF:
https://preview.aclanthology.org/ingest-emnlp/2025.mrl-main.33.pdf