Learning an Input Filter for Argument Structure Acquisition

Laurel Perkins, Naomi Feldman, Jeffrey Lidz


Abstract
How do children learn a verb’s argument structure when their input contains non-basic clauses that obscure verb transitivity? Here we present a new model that infers verb transitivity by learning to filter out non-basic clauses that were likely parsed in error. In simulations with child-directed speech, we show that this model accurately categorizes the majority of 50 frequent transitive, intransitive, and alternating verbs, and jointly learns appropriate parameters for filtering parsing errors. Our model is thus able to filter out problematic data for verb learning without knowing in advance which data need to be filtered.
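As a rough illustration of the kind of joint inference the abstract describes, the Python sketch below categorizes verbs while estimating a shared parsing-error rate with EM. This is not the authors' implementation: the three class labels and their object rates, the assumption that misparsed clauses look transitive half the time, and the EM procedure itself are all illustrative assumptions, and the counts are fake.

import numpy as np

# Assumed direct-object rates for three verb classes (illustrative values):
# transitive, intransitive, alternating.
RATES = np.array([0.95, 0.05, 0.5])
CLASSES = ["transitive", "intransitive", "alternating"]

def loglik(counts, eps):
    """Per-verb, per-class Bernoulli log-likelihoods given error rate eps."""
    # A clause is misparsed with probability eps; we assume a misparsed
    # clause looks transitive half the time (an illustrative choice).
    p = (1 - eps) * RATES + eps * 0.5
    k = counts[:, :1]   # clauses with an apparent direct object
    m = counts[:, 1:]   # clauses without one
    return k * np.log(p) + m * np.log(1 - p)

def em(counts, n_iters=50):
    """Jointly infer class posteriors per verb and a global error rate."""
    eps, prior = 0.2, np.full(3, 1 / 3)
    for _ in range(n_iters):
        # E-step: posterior over classes for each verb given current eps.
        log_post = np.log(prior) + loglik(counts, eps)
        post = np.exp(log_post - log_post.max(axis=1, keepdims=True))
        post /= post.sum(axis=1, keepdims=True)
        # M-step: closed-form update of the class prior; coarse grid search
        # for eps (a real model might sample or take gradients instead).
        prior = post.mean(axis=0)
        grid = np.linspace(0.01, 0.6, 60)
        scores = [(post * loglik(counts, e)).sum() for e in grid]
        eps = grid[int(np.argmax(scores))]
    return post, eps

if __name__ == "__main__":
    # Fake (object, no-object) counts for four verbs; not real CHILDES data.
    counts = np.array([[40, 8], [5, 45], [25, 25], [38, 10]])
    post, eps = em(counts)
    for row in post:
        print(CLASSES[int(np.argmax(row))], np.round(row, 2))
    print("inferred error rate:", round(float(eps), 2))

The paper's actual model is Bayesian rather than this toy EM; the sketch only mirrors the joint structure the abstract reports, in which verb categories and error-filtering parameters are learned together rather than the filter being fixed in advance.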
Anthology ID: W17-0702
Volume: Proceedings of the 7th Workshop on Cognitive Modeling and Computational Linguistics (CMCL 2017)
Month: April
Year: 2017
Address: Valencia, Spain
Venue: CMCL
Publisher: Association for Computational Linguistics
Pages: 11–19
URL: https://aclanthology.org/W17-0702
DOI: 10.18653/v1/W17-0702
Cite (ACL): Laurel Perkins, Naomi Feldman, and Jeffrey Lidz. 2017. Learning an Input Filter for Argument Structure Acquisition. In Proceedings of the 7th Workshop on Cognitive Modeling and Computational Linguistics (CMCL 2017), pages 11–19, Valencia, Spain. Association for Computational Linguistics.
Cite (Informal): Learning an Input Filter for Argument Structure Acquisition (Perkins et al., CMCL 2017)
PDF: https://preview.aclanthology.org/auto-file-uploads/W17-0702.pdf