Emergent Abilities in Reduced-Scale Generative Language Models

Sherin Muckatira, Vijeta Deshpande, Vladislav Lialin, Anna Rumshisky


Abstract
Large language models can solve new tasks without task-specific fine-tuning. This ability, also known as in-context learning (ICL), is considered an emergent ability and is primarily seen in large language models with billions of parameters. This study investigates whether such emergent properties are strictly tied to model size or can be demonstrated by smaller models trained on reduced-scale data. To explore this, we simplify the pre-training data and pre-train 36 causal language models ranging from 1 million to 165 million parameters. We show that models trained on this simplified pre-training data demonstrate enhanced zero-shot capabilities across various tasks in simplified language, achieving performance comparable to that of pre-trained models six times larger on unrestricted language. This suggests that downscaling the language allows zero-shot learning capabilities to emerge in models of limited size. Additionally, we find that these smaller models pre-trained on simplified data exhibit a power law relationship between evaluation loss and three scaling factors: compute, dataset size, and model size.
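To make the scaling-law claim concrete, the snippet below is a minimal sketch of fitting a power law to (model size, evaluation loss) pairs. The saturating functional form L(N) = a * N^(-alpha) + c, the scipy-based fit, and all numeric values are illustrative assumptions, not the paper's exact procedure or results.

# Hedged sketch: fit L(N) = a * N**(-alpha) + c to (model size, eval loss) pairs.
# The functional form and the example numbers are assumptions for illustration only.
import numpy as np
from scipy.optimize import curve_fit

def power_law(n, a, alpha, c):
    # n: scaling factor (here, parameter count); c: irreducible loss floor
    return a * np.power(n, -alpha) + c

# Hypothetical observations spanning roughly the 1M-165M parameter range
sizes = np.array([1e6, 5e6, 2e7, 8e7, 1.65e8])
losses = np.array([4.8, 4.1, 3.6, 3.2, 3.0])

params, _ = curve_fit(power_law, sizes, losses, p0=[10.0, 0.1, 2.0], maxfev=10000)
a, alpha, c = params
print(f"fitted exponent alpha = {alpha:.3f}, loss floor c = {c:.3f}")

The same fit can be repeated with training compute or dataset size on the x-axis to estimate a separate exponent for each scaling factor.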
Anthology ID:
2024.findings-naacl.79
Volume:
Findings of the Association for Computational Linguistics: NAACL 2024
Month:
June
Year:
2024
Address:
Mexico City, Mexico
Editors:
Kevin Duh, Helena Gomez, Steven Bethard
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
1242–1257
URL:
https://aclanthology.org/2024.findings-naacl.79
Cite (ACL):
Sherin Muckatira, Vijeta Deshpande, Vladislav Lialin, and Anna Rumshisky. 2024. Emergent Abilities in Reduced-Scale Generative Language Models. In Findings of the Association for Computational Linguistics: NAACL 2024, pages 1242–1257, Mexico City, Mexico. Association for Computational Linguistics.
Cite (Informal):
Emergent Abilities in Reduced-Scale Generative Language Models (Muckatira et al., Findings 2024)
PDF:
https://preview.aclanthology.org/naacl24-info/2024.findings-naacl.79.pdf
Copyright:
2024.findings-naacl.79.copyright.pdf