Goldfish: Monolingual Language Models for 350 Languages
Tyler A. Chang, Catherine Arnett, Zhuowen Tu, Benjamin Bergen
Abstract
For many low-resource languages, the only available language models are large multilingual models trained on many languages simultaneously. Despite state-of-the-art performance on reasoning tasks, we find that these models still struggle with basic grammatical text generation in many languages. First, large multilingual models perform worse than bigram models for many languages (e.g. 24% of languages in XGLM 4.5B; 43% in BLOOM 7.1B) using FLORES perplexity as an evaluation metric. Second, when we train small monolingual models with only 125M parameters on 1GB or less of data for 350 languages, these small models outperform large multilingual models both in perplexity and on a massively multilingual grammaticality benchmark. To facilitate future work on low-resource language modeling, we release Goldfish, a suite of over 1,000 small monolingual language models trained comparably for 350 languages. These models represent the first publicly available monolingual language models for 215 of the languages included.
- Anthology ID:
- 2026.lrec-main.300
- Volume:
- Proceedings of the Fifteenth Language Resources and Evaluation Conference
- Month:
- May
- Year:
- 2026
- Address:
- Palma de Mallorca, Spain
- Editors:
- Stelios Piperidis, Núria Bel, Henk van den Heuvel, Nancy Ide, Simon Krek, Antonio Toral
- Venue:
- LREC
- Publisher:
- ELRA Language Resource Association
- Pages:
- 3750–3781
- URL:
- https://preview.aclanthology.org/ingest-lrec/2026.lrec-main.300/
- Cite (ACL):
- Tyler A. Chang, Catherine Arnett, Zhuowen Tu, and Benjamin Bergen. 2026. Goldfish: Monolingual Language Models for 350 Languages. In Proceedings of the Fifteenth Language Resources and Evaluation Conference, pages 3750–3781, Palma de Mallorca, Spain. ELRA Language Resource Association.
- Cite (Informal):
- Goldfish: Monolingual Language Models for 350 Languages (Chang et al., LREC 2026)
- PDF:
- https://preview.aclanthology.org/ingest-lrec/2026.lrec-main.300.pdf
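As a minimal illustration of the bigram baseline described in the abstract, the sketch below trains a word-level bigram model with add-one smoothing and scores held-out text by perplexity. The tokenization, smoothing choice, and toy data are illustrative assumptions, not the paper's actual setup (the paper evaluates on FLORES).

```python
import math
from collections import Counter

def train_bigram(tokens):
    """Count unigrams and bigrams from a list of tokens."""
    unigrams = Counter(tokens)
    bigrams = Counter(zip(tokens, tokens[1:]))
    return unigrams, bigrams

def bigram_perplexity(tokens, unigrams, bigrams, vocab_size):
    """Perplexity of `tokens` under an add-one-smoothed bigram model."""
    log_prob = 0.0
    count = 0
    for prev, curr in zip(tokens, tokens[1:]):
        # Add-one (Laplace) smoothing over the vocabulary.
        p = (bigrams[(prev, curr)] + 1) / (unigrams[prev] + vocab_size)
        log_prob += math.log(p)
        count += 1
    return math.exp(-log_prob / count)

# Toy corpus; a real baseline would be trained on the same data as the LM.
corpus = "the cat sat on the mat and the dog sat on the rug".split()
uni, bi = train_bigram(corpus)
held_out = "the cat sat on the rug".split()
ppl = bigram_perplexity(held_out, uni, bi, len(uni))
```

A lower perplexity means the model assigns higher probability to the held-out text; the abstract's finding is that such a simple baseline can beat large multilingual models on this metric for a substantial fraction of languages.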