GCG-Based Artificial Languages for Evaluating Inductive Biases of Neural Language Models

Nadine El-Naggar, Tatsuki Kuribayashi, Ted Briscoe


Abstract
Recent work has used artificial languages to investigate whether extant neural language models (LMs) have an inbuilt inductive bias towards acquiring attested, typologically frequent grammatical patterns as opposed to infrequent, unattested, or impossible ones (White and Cotterell, 2021; Kuribayashi et al., 2024). Artificial languages make it possible to isolate specific grammatical properties from other factors such as lexical or real-world knowledge, but they also risk oversimplifying the problem. In this paper, we examine the use of Generalized Categorial Grammars (GCGs) (Wood, 2014) as a general framework for creating artificial languages with a wider range of attested word order patterns, including those in which the subject intervenes between verb and object (VSO, OSV) and unbounded dependencies in object relative clauses. In our experiments, we exemplify our approach by extending White and Cotterell (2021) and report some significant differences from existing results.
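To make the categorial-grammar idea concrete, here is a minimal sketch (not the paper's code; the category encoding, lexicon, and recognizer are illustrative assumptions) of a plain AB categorial grammar, showing how the category assigned to a transitive verb alone determines basic word order. GCGs extend this core with further operations, which is what makes patterns such as unbounded dependencies in object relatives expressible.

```python
# Minimal sketch, not the authors' implementation: a toy AB categorial
# grammar (forward/backward application only, without GCG extensions
# such as composition) in which the verb's category determines basic
# word order. Category encoding and lexicon are illustrative assumptions.
from itertools import product

S, NP = "S", "NP"  # atomic categories; complex ones are (result, slash, arg)

def forward(x, y):
    # Forward application: X/Y followed by Y derives X.
    return x[0] if isinstance(x, tuple) and x[1] == "/" and x[2] == y else None

def backward(x, y):
    # Backward application: Y followed by X\Y derives X.
    return y[0] if isinstance(y, tuple) and y[1] == "\\" and y[2] == x else None

def derivable(cats):
    """CKY recognizer: return every category derivable for the whole string."""
    n = len(cats)
    chart = {(i, i + 1): {c} for i, c in enumerate(cats)}
    for span in range(2, n + 1):
        for i in range(n - span + 1):
            j = i + span
            chart[(i, j)] = set()
            for k in range(i + 1, j):
                for l, r in product(chart[(i, k)], chart[(k, j)]):
                    for out in (forward(l, r), backward(l, r)):
                        if out is not None:
                            chart[(i, j)].add(out)
    return chart[(0, n)]

# Transitive-verb categories for three basic word orders
# (argument roles simplified: both arguments are bare NPs).
svo = ((S, "\\", NP), "/", NP)   # (S\NP)/NP: object right, subject left -> S V O
vso = ((S, "/", NP), "/", NP)    # (S/NP)/NP: both arguments right       -> V S O
sov = ((S, "\\", NP), "\\", NP)  # (S\NP)\NP: both arguments left        -> S O V

assert S in derivable([NP, svo, NP])  # S V O
assert S in derivable([vso, NP, NP])  # V S O
assert S in derivable([NP, NP, sov])  # S O V
```

In this toy setup, swapping one lexical category is enough to change the language's word order while holding everything else fixed, which is the kind of controlled manipulation artificial-language studies rely on.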
Anthology ID: 2025.conll-1.35
Volume: Proceedings of the 29th Conference on Computational Natural Language Learning
Month: July
Year: 2025
Address: Vienna, Austria
Editors: Gemma Boleda, Michael Roth
Venues: CoNLL | WS
Publisher: Association for Computational Linguistics
Pages: 540–556
URL: https://preview.aclanthology.org/acl25-workshop-ingestion/2025.conll-1.35/
Cite (ACL): Nadine El-Naggar, Tatsuki Kuribayashi, and Ted Briscoe. 2025. GCG-Based Artificial Languages for Evaluating Inductive Biases of Neural Language Models. In Proceedings of the 29th Conference on Computational Natural Language Learning, pages 540–556, Vienna, Austria. Association for Computational Linguistics.
Cite (Informal): GCG-Based Artificial Languages for Evaluating Inductive Biases of Neural Language Models (El-Naggar et al., CoNLL 2025)
PDF: https://preview.aclanthology.org/acl25-workshop-ingestion/2025.conll-1.35.pdf