Abstract
Large Language Models (LLMs) are increasingly deployed in user-facing applications worldwide, which requires them to handle multiple languages across a variety of tasks. We propose a metric called Information Parity (IP) that can predict an LLM’s capabilities across multiple languages in a task-agnostic manner. IP is well motivated from an information-theoretic perspective: it reflects how efficiently the LLM compresses text in a given language relative to a reference language. We evaluate IP alongside other popular metrics, such as Tokenization Parity (TP) and Tokenizer Fertility (TF), on several variants of open-source LLMs (Llama2, Gemma, Mistral). Among the metrics known to us, IP correlates best with existing task-specific benchmark scores from the literature and therefore best predicts such scores for a given language. These findings suggest that IP may be useful for ranking the multilingual capabilities of LLMs regardless of the downstream task.
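The abstract does not give the exact formula, so the following is a minimal sketch of one plausible reading: IP as the ratio of the total negative log-likelihood (in bits) that a causal LM assigns to a reference-language text versus a parallel target-language text, so that values near 1 mean the target language is compressed about as efficiently as the reference. The model name and sentence pair are placeholders, and the paper's precise normalization may differ.

```python
import math

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer


def total_bits(model, tokenizer, text: str) -> float:
    """Total code length (negative log-likelihood, in bits) of `text` under the model."""
    ids = tokenizer(text, return_tensors="pt").input_ids.to(model.device)
    with torch.no_grad():
        # With labels=input_ids, HF causal LMs return the mean cross-entropy
        # (in nats) over the ids.shape[1] - 1 predicted tokens.
        mean_nats = model(ids, labels=ids).loss.item()
    total_nats = mean_nats * (ids.shape[1] - 1)
    return total_nats / math.log(2)  # nats -> bits


def information_parity(model, tokenizer, ref_text: str, tgt_text: str) -> float:
    """Ratio of the model's code length on the reference text to its code length
    on a parallel target-language text (assumed reading of IP)."""
    return total_bits(model, tokenizer, ref_text) / total_bits(model, tokenizer, tgt_text)


if __name__ == "__main__":
    # Placeholder model; the paper evaluates Llama2, Gemma, and Mistral variants.
    name = "mistralai/Mistral-7B-v0.1"
    tokenizer = AutoTokenizer.from_pretrained(name)
    model = AutoModelForCausalLM.from_pretrained(name)
    model.eval()

    # Placeholder parallel sentences (English reference, German target).
    en = "The committee approved the proposal after a short debate."
    de = "Der Ausschuss billigte den Vorschlag nach einer kurzen Debatte."
    print(f"IP(de | en reference) = {information_parity(model, tokenizer, en, de):.3f}")
```

In practice the ratio would be averaged over many parallel sentences (e.g., a translation benchmark) rather than computed from a single pair, but the per-text computation is the same.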
- Anthology ID: 2024.findings-emnlp.468
- Volume: Findings of the Association for Computational Linguistics: EMNLP 2024
- Month: November
- Year: 2024
- Address: Miami, Florida, USA
- Editors: Yaser Al-Onaizan, Mohit Bansal, Yun-Nung Chen
- Venue: Findings
- Publisher: Association for Computational Linguistics
- Pages: 7971–7989
- URL: https://aclanthology.org/2024.findings-emnlp.468
- DOI: 10.18653/v1/2024.findings-emnlp.468
- Cite (ACL): Alexander Tsvetkov and Alon Kipnis. 2024. Information Parity: Measuring and Predicting the Multilingual Capabilities of Language Models. In Findings of the Association for Computational Linguistics: EMNLP 2024, pages 7971–7989, Miami, Florida, USA. Association for Computational Linguistics.
- Cite (Informal): Information Parity: Measuring and Predicting the Multilingual Capabilities of Language Models (Tsvetkov & Kipnis, Findings 2024)
- PDF: https://preview.aclanthology.org/dois-2013-emnlp/2024.findings-emnlp.468.pdf