Systematic Generalization in Language Models Scales with Information Entropy

Sondre Wold, Lucas Georges Gabriel Charpentier, Étienne Simon


Abstract
Systematic generalization remains challenging for current language models, which are known both to be sensitive to semantically similar permutations of the input and to struggle with known concepts presented in novel contexts. Although benchmarks exist for assessing compositional behavior, it is unclear how to measure the difficulty of a systematic generalization problem. In this work, we show how one aspect of systematic generalization can be described by the entropy of the distribution of component parts in the training data. We formalize a framework for measuring entropy in a sequence-to-sequence task and find that the performance of popular model architectures scales with the entropy. Our work connects systematic generalization to information efficiency, and our results indicate that success at high entropy can be achieved even without built-in priors, and that success at low entropy can serve as a target for assessing progress towards robust systematic generalization.
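To make the central quantity concrete, the following is a minimal sketch of computing the Shannon entropy of the empirical distribution of component parts in a sequence-to-sequence training set. It illustrates the general idea described in the abstract, not the paper's exact formalization; the function name, the whitespace tokenization, and the toy examples are all illustrative assumptions.

# Minimal sketch (assumed, not the paper's formal framework): Shannon entropy
# of the empirical distribution of component parts across training sequences.
import math
from collections import Counter

def component_entropy(training_sequences):
    """Entropy (in bits) of the component-part distribution, assuming
    whitespace-separated parts; real setups may define parts differently."""
    counts = Counter(part for seq in training_sequences for part in seq.split())
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# A near-uniform distribution over parts yields higher entropy,
# while a skewed distribution over the same task format yields lower entropy.
print(component_entropy(["jump twice", "walk left", "look right"]))  # ~2.58 bits
print(component_entropy(["jump", "jump", "jump", "walk"]))           # ~0.81 bits

Under this reading, low-entropy training data concentrates probability mass on few component parts, which is the harder regime for systematic generalization that the abstract proposes as an assessment target.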
Anthology ID:
2025.findings-acl.90
Volume:
Findings of the Association for Computational Linguistics: ACL 2025
Month:
July
Year:
2025
Address:
Vienna, Austria
Editors:
Wanxiang Che, Joyce Nabende, Ekaterina Shutova, Mohammad Taher Pilehvar
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
1807–1819
URL:
https://preview.aclanthology.org/display_plenaries/2025.findings-acl.90/
Cite (ACL):
Sondre Wold, Lucas Georges Gabriel Charpentier, and Étienne Simon. 2025. Systematic Generalization in Language Models Scales with Information Entropy. In Findings of the Association for Computational Linguistics: ACL 2025, pages 1807–1819, Vienna, Austria. Association for Computational Linguistics.
Cite (Informal):
Systematic Generalization in Language Models Scales with Information Entropy (Wold et al., Findings 2025)
PDF:
https://preview.aclanthology.org/display_plenaries/2025.findings-acl.90.pdf