Proceedings of the 7th Workshop on Cognitive Modeling and Computational Linguistics (CMCL 2017)

Ted Gibson, Tal Linzen, Asad Sayeed, Martin van Schijndel, William Schuler (Editors)


Anthology ID: W17-07
Month: April
Year: 2017
Address: Valencia, Spain
Venue: CMCL
Publisher: Association for Computational Linguistics
URL: https://aclanthology.org/W17-07
DOI: 10.18653/v1/W17-07

Entropy Reduction correlates with temporal lobe activity
Matthew Nelson | Stanislas Dehaene | Christophe Pallier | John Hale

Using the Entropy Reduction incremental complexity metric, we relate high gamma power signals from the brains of epileptic patients to incremental stages of syntactic analysis in English and French. We find that signals recorded intracranially from the anterior Inferior Temporal Sulcus (aITS) and the posterior Inferior Temporal Gyrus (pITG) correlate with word-by-word Entropy Reduction values derived from phrase structure grammars for those languages. In the anterior region, this correlation persists even in combination with surprisal co-predictors from PCFG and n-gram models. The result confirms the idea that the brain’s temporal lobe houses a parsing function, one whose incremental processing difficulty profile reflects changes in grammatical uncertainty.
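
For readers unfamiliar with the metric, the sketch below illustrates Entropy Reduction in miniature: the difficulty assigned to a word is the drop in uncertainty (entropy) over the complete analyses still compatible with the sentence so far. The toy weighted sentence set, the words, and the probabilities are invented for illustration; the paper derives these values from phrase structure grammars of English and French, not from an enumerated sentence list.

import math

# Toy weighted "language": probabilities over complete sentences (invented).
language = {
    ("the", "dog", "barked"): 0.4,
    ("the", "dog", "bit", "the", "man"): 0.3,
    ("the", "man", "barked"): 0.1,
    ("the", "man", "bit", "the", "dog"): 0.2,
}

def prefix_entropy(prefix):
    """Entropy (bits) over complete sentences still consistent with the prefix."""
    consistent = {s: p for s, p in language.items() if s[:len(prefix)] == tuple(prefix)}
    total = sum(consistent.values())
    return -sum((p / total) * math.log2(p / total) for p in consistent.values())

def entropy_reduction(sentence):
    """Word-by-word Entropy Reduction: max(0, H before the word - H after it)."""
    values, prefix = [], []
    for word in sentence:
        before = prefix_entropy(prefix)
        prefix.append(word)
        values.append(max(0.0, before - prefix_entropy(prefix)))
    return values

print(entropy_reduction(["the", "dog", "bit", "the", "man"]))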

Learning an Input Filter for Argument Structure Acquisition
Laurel Perkins | Naomi Feldman | Jeffrey Lidz

How do children learn a verb’s argument structure when their input contains non-basic clauses that obscure verb transitivity? Here we present a new model that infers verb transitivity by learning to filter out non-basic clauses that were likely parsed in error. In simulations with child-directed speech, we show that this model accurately categorizes the majority of 50 frequent transitive, intransitive, and alternating verbs, and jointly learns appropriate parameters for filtering parsing errors. Our model is thus able to filter out problematic data for verb learning without knowing in advance which data need to be filtered.
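
The abstract does not spell out the model, but the general idea of jointly categorizing verbs while filtering likely parse errors can be sketched with a toy likelihood model: each verb is assumed transitive, intransitive, or alternating, clauses are mis-parsed with some global error rate, and a grid search picks the error rate and verb labels that best explain the counts. The verbs, counts, and single-error-rate assumption below are hypothetical and much simpler than the paper's model.

import math
from math import comb

# Hypothetical counts (not the paper's corpus statistics):
# verb -> (clauses parsed with a direct object, clauses parsed without one)
counts = {
    "hit":   (46, 4),
    "sleep": (3, 47),
    "break": (28, 22),
}

# Probability that a clause of each verb class truly contains a direct object.
CATEGORIES = {"transitive": 1.0, "intransitive": 0.0, "alternating": 0.5}

def loglik(with_obj, without_obj, p_obj, eps):
    """Binomial log-likelihood, assuming each clause is mis-parsed with rate eps."""
    p = p_obj * (1 - eps) + (1 - p_obj) * eps   # surface rate of perceived objects
    n = with_obj + without_obj
    return math.log(comb(n, with_obj)) + with_obj * math.log(p) + without_obj * math.log(1 - p)

def fit(counts):
    """Grid-search the error rate; label each verb with its best-fitting class."""
    best = None
    for step in range(1, 50):
        eps = step / 100.0
        labels, total = {}, 0.0
        for verb, (w, wo) in counts.items():
            cat, ll = max(((c, loglik(w, wo, p, eps)) for c, p in CATEGORIES.items()),
                          key=lambda pair: pair[1])
            labels[verb] = cat
            total += ll
        if best is None or total > best[0]:
            best = (total, eps, labels)
    return best

total_ll, eps, labels = fit(counts)
print(eps, labels)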

Grounding sound change in ideal observer models of perception
Zachary Burchill | T. Florian Jaeger

An important predictor of historical sound change, functional load, fails to capture insights from speech perception. Building on ideal observer models of word recognition, we devise a new definition of functional load that incorporates both a priori predictability and perceptual information. We explore this new measure with a simple model and find that it outperforms traditional measures.
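
As background, the traditional entropy-based notion of functional load that the paper argues is perceptually blind can be computed by merging a pair of phonemes in the lexicon and measuring the relative drop in lexical entropy. The toy lexicon and frequencies below are invented, and the sketch shows only the traditional measure, not the authors' ideal-observer reformulation.

import math
from collections import Counter

# Toy lexicon with hypothetical word frequencies (illustration only).
lexicon = {"pat": 10, "bat": 8, "pin": 6, "bin": 4, "cat": 12}

def entropy(freqs):
    total = sum(freqs)
    return -sum(f / total * math.log2(f / total) for f in freqs)

def functional_load(lexicon, x, y):
    """Traditional functional load of the x/y contrast: the relative drop in
    lexical entropy when the two phonemes are merged."""
    merged = Counter()
    for word, freq in lexicon.items():
        merged[word.replace(y, x)] += freq
    h_full = entropy(lexicon.values())
    return (h_full - entropy(merged.values())) / h_full

print(functional_load(lexicon, "p", "b"))   # /p/-/b/ separates pat/bat and pin/bin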

“Oh, I’ve Heard That Before”: Modelling Own-Dialect Bias After Perceptual Learning by Weighting Training Data
Rachael Tatman

Human listeners are able to quickly and robustly adapt to new accents, and do so by using information about speakers’ identities. This paper presents experimental evidence that, even considering information about speakers’ identities, listeners retain a strong bias towards the acoustics of their own dialect after dialect learning. Participants’ behaviour was accurately mimicked by a classifier which was trained on more cases from the base dialect and fewer from the target dialect. This suggests that imbalanced training data may result in automatic speech recognition errors consistent with those of speakers from populations over-represented in the training data.
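
A minimal sketch of the weighting idea, with made-up "acoustic" features and dialect means rather than the paper's stimuli or classifier: training on many base-dialect tokens and few target-dialect tokens pulls the decision boundary toward the base dialect, degrading performance on the under-represented dialect.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def sample(mean, n):
    """Draw n 2-D 'acoustic' tokens around a hypothetical category mean."""
    return rng.normal(loc=mean, scale=1.0, size=(n, 2))

# Imbalanced training set: many base-dialect tokens, few target-dialect tokens,
# mirroring the weighting used to mimic listeners' own-dialect bias.
X_train = np.vstack([sample([0.0, 0.0], 900),   # base dialect, category A
                     sample([3.0, 3.0], 900),   # base dialect, category B
                     sample([1.0, 1.0], 100),   # target dialect, category A
                     sample([4.0, 4.0], 100)])  # target dialect, category B
y_train = np.array([0] * 900 + [1] * 900 + [0] * 100 + [1] * 100)

clf = LogisticRegression().fit(X_train, y_train)

# Test on target-dialect tokens only: the decision boundary is dominated by the
# over-represented base dialect, so target-dialect accuracy suffers.
X_test = np.vstack([sample([1.0, 1.0], 500), sample([4.0, 4.0], 500)])
y_test = np.array([0] * 500 + [1] * 500)
print("accuracy on target dialect:", clf.score(X_test, y_test))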

Inherent Biases of Recurrent Neural Networks for Phonological Assimilation and Dissimilation
Amanda Doucette

A recurrent neural network model of phonological pattern learning is proposed. The model is a relatively simple neural network with one recurrent layer, and displays biases in learning that mimic observed biases in human learning. Single-feature patterns are learned faster than two-feature patterns, and vowel or consonant-only patterns are learned faster than patterns involving vowels and consonants, mimicking the results of laboratory learning experiments. In non-recurrent models, capturing these biases requires the use of alpha features or some other representation of repeated features, but with a recurrent neural network, these elaborations are not necessary.
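
A rough sketch of the kind of model described, not the author's architecture or training data: a network with a single recurrent layer classifying toy "words" (sequences of binary feature vectors) by whether they obey a one-feature harmony pattern. The feature dimensionality, hidden size, and data generation below are assumptions made for illustration.

import torch
import torch.nn as nn

torch.manual_seed(0)

def make_word(harmonic):
    """A toy 'word': 3-6 segments, each a 4-dimensional binary feature vector.
    Feature 0 plays the role of the harmonizing feature (an assumption)."""
    length = torch.randint(3, 7, (1,)).item()
    feats = torch.randint(0, 2, (length, 4)).float()
    if harmonic:
        feats[:, 0] = feats[0, 0]                 # copy feature 0 across segments
    else:
        feats[0, 0], feats[-1, 0] = 0.0, 1.0      # guarantee a disagreement
    return feats

class SimpleRNN(nn.Module):
    """One recurrent layer plus a linear read-out."""
    def __init__(self, n_features=4, hidden=16):
        super().__init__()
        self.rnn = nn.RNN(n_features, hidden, batch_first=True)
        self.out = nn.Linear(hidden, 1)

    def forward(self, x):                          # x: (1, length, n_features)
        _, h = self.rnn(x)                         # h: final hidden state
        return self.out(h[-1])                     # one logit per word

model = SimpleRNN()
optim = torch.optim.Adam(model.parameters(), lr=0.01)
loss_fn = nn.BCEWithLogitsLoss()

for step in range(2000):                           # alternate harmonic / disharmonic words
    label = float(step % 2)
    word = make_word(bool(label)).unsqueeze(0)
    loss = loss_fn(model(word), torch.tensor([[label]]))
    optim.zero_grad()
    loss.backward()
    optim.step()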

Predicting Japanese scrambling in the wild
Naho Orita

Japanese speakers have a choice between canonical SOV and scrambled OSV word order to express the same meaning. Although previous experiments have examined the influence of one or two factors on scrambling in controlled settings, it is not yet known how multiple factors jointly contribute to scrambling. This study uses naturally distributed data to test multiple effects on scrambling simultaneously. A regression analysis replicates the NP length effect and suggests an influence of noun type, but provides no evidence for syntactic priming, given-new ordering, or an animacy effect. These findings show evidence only for sentence-internal factors; we find no evidence that discourse-level factors play a role.
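
A hedged sketch of the kind of regression analysis described: a logistic regression predicting whether a clause surfaces scrambled from several predictors at once. The data frame, predictor names, and values below are invented for illustration and are not the study's corpus or its exact predictor set.

import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical clause-level data (invented; not the study's corpus).
# scrambled: 1 if the clause surfaced as OSV, 0 if canonical SOV.
df = pd.DataFrame({
    "scrambled":      [1, 0, 0, 1, 0, 1, 0, 0, 1, 0, 0, 1],
    "np_length_diff": [4, -1, 3, 0, -2, 5, 1, -3, 2, 4, -1, 1],  # object NP length minus subject NP length, in words
    "object_given":   [1, 0, 1, 1, 0, 1, 0, 0, 1, 0, 1, 0],
    "object_animate": [0, 1, 0, 0, 1, 0, 1, 0, 0, 1, 0, 1],
})

# Logistic regression testing several candidate predictors of scrambling at once.
model = smf.logit("scrambled ~ np_length_diff + object_given + object_animate",
                  data=df).fit(disp=False)
print(model.params)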