Biasless Language Models Learn Unnaturally: How LLMs Fail to Distinguish the Possible from the Impossible

Imry Ziv, Nur Lan, Emmanuel Chemla


Abstract
Are large language models (LLMs) sensitive to the distinction between humanly possible and impossible languages? This question was recently used in a broader debate on whether LLMs and humans share the same innate learning biases. Previous work has answered it in the affirmative by comparing LLM learning curves on existing language datasets and on "impossible" datasets derived from them via various perturbation functions. Using the same methodology, we examine this claim on a wider set of languages and impossible perturbations. We find that in most cases, GPT-2 learns each language and its impossible counterpart equally easily, in contrast to previous findings. We also apply a more lenient criterion by testing whether GPT-2 provides any kind of separation between the whole set of natural languages and the whole set of impossible languages, based on cross-linguistic variance in metrics derived from the learning curves. Taken together, these perspectives show that GPT-2 provides no systematic separation between the possible and the impossible.
Anthology ID:
2026.eacl-long.249
Volume:
Proceedings of the 19th Conference of the European Chapter of the Association for Computational Linguistics (Volume 1: Long Papers)
Month:
March
Year:
2026
Address:
Rabat, Morocco
Editors:
Vera Demberg, Kentaro Inui, Lluís Marquez
Venue:
EACL
Publisher:
Association for Computational Linguistics
Pages:
5393–5403
URL:
https://preview.aclanthology.org/ingest-eacl/2026.eacl-long.249/
Cite (ACL):
Imry Ziv, Nur Lan, and Emmanuel Chemla. 2026. Biasless Language Models Learn Unnaturally: How LLMs Fail to Distinguish the Possible from the Impossible. In Proceedings of the 19th Conference of the European Chapter of the Association for Computational Linguistics (Volume 1: Long Papers), pages 5393–5403, Rabat, Morocco. Association for Computational Linguistics.
Cite (Informal):
Biasless Language Models Learn Unnaturally: How LLMs Fail to Distinguish the Possible from the Impossible (Ziv et al., EACL 2026)
PDF:
https://preview.aclanthology.org/ingest-eacl/2026.eacl-long.249.pdf