Huteng Dai


2025

Mind the Gap: How BabyLMs Learn Filler-Gap Dependencies
Chi-Yun Chang | Xueyang Huang | Humaira Nasir | Shane Storks | Olawale Akingbade | Huteng Dai
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing

Humans acquire syntactic constructions like filler-gap dependencies from limited and often noisy input. Can neural language models do the same? We investigate this question by evaluating GPT-2 models trained on child-oriented input from the BabyLM Challenge. Our experiments focus on whether these “baby” language models acquire filler-gap dependencies, generalize across constructions, and respect structural constraints such as island effects. We test four models trained on child language on a suite of syntactic constructions: two base models (trained on 10M and 100M tokens) and two well-performing models from the BabyLM Challenge (ConcreteGPT and BabbleGPT). We evaluate model behavior using wh-licensing scores, flip tests, and grammaticality contrasts across four constructions. Results show that BabyLM-scale models partially acquire filler-gap dependencies but often fail to generalize or to fully capture island constraints.
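
For readers unfamiliar with wh-licensing scores, the sketch below shows one common formulation, in the style of Wilcox et al. (2018): the interaction of surprisal differences across the four filler/gap conditions. This is a minimal illustration, not the paper's evaluation code; the off-the-shelf gpt2 checkpoint, the stimulus sentences, and the surprisal helper are placeholder assumptions.

```python
# Minimal sketch of a wh-licensing interaction score; the checkpoint,
# stimuli, and helper names below are illustrative assumptions.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def surprisal(prefix: str, continuation: str) -> float:
    """Total surprisal (in nats) of `continuation` given `prefix`."""
    prefix_ids = tokenizer.encode(prefix)
    cont_ids = tokenizer.encode(continuation)
    ids = torch.tensor([prefix_ids + cont_ids])
    with torch.no_grad():
        log_probs = torch.log_softmax(model(ids).logits, dim=-1)
    # Logits at position p predict the token at position p + 1.
    return -sum(
        log_probs[0, len(prefix_ids) + i - 1, tok].item()
        for i, tok in enumerate(cont_ids)
    )

# Surprisal at the post-gap region across the four filler/gap conditions.
region = " yesterday"
s_fg = surprisal("I know what the chef prepared", region)           # +filler, +gap
s_fn = surprisal("I know what the chef prepared the meal", region)  # +filler, -gap
s_ng = surprisal("I know that the chef prepared", region)           # -filler, +gap
s_nn = surprisal("I know that the chef prepared the meal", region)  # -filler, -gap

# A filler should make a gap less surprising (and a filled position more so);
# a positive interaction indicates the dependency is licensed.
licensing = (s_ng - s_fg) - (s_nn - s_fn)
print(f"wh-licensing interaction: {licensing:.3f} nats")
```

The flip tests and grammaticality contrasts mentioned in the abstract compare surprisals in other configurations; this sketch covers only the licensing interaction.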

2023

Rethinking representations: A log-bilinear model of phonotactics
Huteng Dai | Connor Mayer | Richard Futrell
Proceedings of the Society for Computation in Linguistics 2023

2021

Learning nonlocal phonotactics in Strictly Piecewise phonotactic model
Huteng Dai
Proceedings of the Society for Computation in Linguistics 2021

Simple induction of (deterministic) probabilistic finite-state automata for phonotactics by stochastic gradient descent
Huteng Dai | Richard Futrell
Proceedings of the 18th SIGMORPHON Workshop on Computational Research in Phonetics, Phonology, and Morphology

We introduce a simple and highly general phonotactic learner which induces a probabilistic finite-state automaton from word-form data. We describe the learner and show how to parameterize it to induce unrestricted regular languages, as well as how to restrict it to certain subregular classes such as Strictly k-Local and Strictly k-Piecewise languages. We evaluate the learner on its ability to learn phonotactic constraints in toy examples and in datasets of Quechua and Navajo. We find that an unrestricted learner is the most accurate overall when modeling attested forms not seen in training; however, only the learner restricted to the Strictly Piecewise language class successfully captures certain nonlocal phonotactic constraints. Our learner serves as a baseline for more sophisticated methods.
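
As a rough illustration of the approach described above, here is a minimal sketch of inducing a probabilistic finite-state automaton by stochastic gradient descent, restricted to the Strictly 2-Local case (state = the previous symbol). The alphabet, toy corpus, and helper names (sym2idx, neg_log_likelihood) are illustrative assumptions, not the paper's implementation or its Quechua and Navajo data.

```python
# Minimal sketch of PFSA induction by SGD, restricted to the Strictly
# 2-Local case (state = previous symbol); alphabet and corpus are toys.
import torch

alphabet = ["a", "b", "c"]
EOS = "#"                      # word-boundary symbol, also the start state
symbols = alphabet + [EOS]
sym2idx = {s: i for i, s in enumerate(symbols)}

# Unnormalized transition scores: current state x next symbol.
weights = torch.zeros(len(symbols), len(symbols), requires_grad=True)
optimizer = torch.optim.SGD([weights], lr=0.5)

def neg_log_likelihood(word: str) -> torch.Tensor:
    """NLL of a word under the current automaton parameters."""
    log_probs = torch.log_softmax(weights, dim=-1)
    nll = torch.tensor(0.0)
    state = sym2idx[EOS]
    for ch in list(word) + [EOS]:
        nll = nll - log_probs[state, sym2idx[ch]]
        state = sym2idx[ch]    # Strictly 2-Local: remember only the last symbol
    return nll

data = ["abab", "abc", "bac", "abcab"]   # toy corpus with no aa/bb bigrams
for epoch in range(200):
    optimizer.zero_grad()
    loss = sum(neg_log_likelihood(w) for w in data)
    loss.backward()
    optimizer.step()

# Rows are states, columns next-symbol probabilities; transitions for
# unattested bigrams (e.g. a -> a) should end up with low mass.
with torch.no_grad():
    print(torch.softmax(weights, dim=-1))
```

Changing what the state tracks (for instance, the set of symbols seen so far rather than only the last one) yields the Strictly Piecewise variant that the abstract credits with capturing nonlocal constraints.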

2020

Work in Progress: Information-theoretic characterization of the subregular hierarchy
Huteng Dai | Richard Futrell
Proceedings of the Society for Computation in Linguistics 2020