Seq2Seq Models with Dropout can Learn Generalizable Reduplication

Brandon Prickett, Aaron Traylor, Joe Pater


Abstract
Natural language reduplication can pose a challenge to neural models of language and has been argued to require variables (Marcus et al., 1999). Sequence-to-sequence neural networks have been shown to perform well on a number of other morphological tasks (Cotterell et al., 2016) and to produce results that correlate highly with human behavior (Kirov, 2017; Kirov & Cotterell, 2018), yet they do not include any explicit variables in their architecture. We find that they can learn a reduplicative pattern that generalizes to novel segments if they are trained with dropout (Srivastava et al., 2014). We argue that this matches the scope of generalization observed in human reduplication.
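To make the setup described in the abstract concrete, the sketch below shows a toy encoder-decoder with dropout trained on a total-reduplication mapping (e.g. /ba/ → [baba]) and then tested on a syllable containing a segment withheld from training. This is an illustrative PyTorch sketch under stated assumptions, not the authors' implementation: the GRU architecture, hyperparameters, and segment inventory are all invented for exposition; the only point it reproduces from the abstract is that dropout is the regularizer and generalization is probed on a novel segment.

```python
# Minimal sketch (not the authors' code) of a seq2seq model with dropout
# trained on a toy total-reduplication task and tested on a held-out segment.
# All hyperparameters and the segment inventory are illustrative assumptions.
import random
import torch
import torch.nn as nn

SEGS = list("ptkbdgmn") + ["a", "i", "u"]       # toy segment inventory
PAD, SOS, EOS = "<pad>", "<sos>", "<eos>"
VOCAB = [PAD, SOS, EOS] + SEGS
IDX = {s: i for i, s in enumerate(VOCAB)}
HELD_OUT = "g"                                   # segment never seen in training

def make_item(c, v):
    """Map a CV syllable to (input ids, target ids) for total reduplication."""
    src = torch.tensor([IDX[c], IDX[v], IDX[EOS]])
    tgt = torch.tensor([IDX[SOS], IDX[c], IDX[v], IDX[c], IDX[v], IDX[EOS]])
    return src, tgt

class Seq2Seq(nn.Module):
    def __init__(self, vocab, hidden=64, p_drop=0.5):
        super().__init__()
        self.emb = nn.Embedding(vocab, hidden)
        self.drop = nn.Dropout(p_drop)           # dropout (Srivastava et al., 2014)
        self.enc = nn.GRU(hidden, hidden, batch_first=True)
        self.dec = nn.GRU(hidden, hidden, batch_first=True)
        self.out = nn.Linear(hidden, vocab)

    def forward(self, src, tgt_in):
        _, h = self.enc(self.drop(self.emb(src)))
        dec_out, _ = self.dec(self.drop(self.emb(tgt_in)), h)
        return self.out(dec_out)

model = Seq2Seq(len(VOCAB))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()
consonants = list("ptkbdmn")                     # HELD_OUT "g" excluded
vowels = ["a", "i", "u"]

for step in range(2000):                         # teacher-forced training on attested forms
    c, v = random.choice(consonants), random.choice(vowels)
    src, tgt = make_item(c, v)
    logits = model(src.unsqueeze(0), tgt[:-1].unsqueeze(0))
    loss = loss_fn(logits.squeeze(0), tgt[1:])
    opt.zero_grad()
    loss.backward()
    opt.step()

# Test: does the model reduplicate a syllable built from the withheld segment?
model.eval()                                     # disables dropout at test time
with torch.no_grad():
    src, _ = make_item(HELD_OUT, "a")
    _, h = model.enc(model.emb(src.unsqueeze(0)))
    tok, decoded = torch.tensor([[IDX[SOS]]]), []
    for _ in range(8):                           # greedy decoding, capped at 8 steps
        dec, h = model.dec(model.emb(tok), h)
        tok = model.out(dec)[:, -1].argmax(-1, keepdim=True)
        if tok.item() == IDX[EOS]:
            break
        decoded.append(VOCAB[tok.item()])
    print("ga ->", "".join(decoded))             # a model that generalizes prints "gaga"
```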
Anthology ID:
W18-5810
Volume:
Proceedings of the Fifteenth Workshop on Computational Research in Phonetics, Phonology, and Morphology
Month:
October
Year:
2018
Address:
Brussels, Belgium
Editors:
Sandra Kuebler, Garrett Nicolai
Venue:
EMNLP
SIG:
SIGMORPHON
Publisher:
Association for Computational Linguistics
Pages:
93–100
URL:
https://aclanthology.org/W18-5810
DOI:
10.18653/v1/W18-5810
Cite (ACL):
Brandon Prickett, Aaron Traylor, and Joe Pater. 2018. Seq2Seq Models with Dropout can Learn Generalizable Reduplication. In Proceedings of the Fifteenth Workshop on Computational Research in Phonetics, Phonology, and Morphology, pages 93–100, Brussels, Belgium. Association for Computational Linguistics.
Cite (Informal):
Seq2Seq Models with Dropout can Learn Generalizable Reduplication (Prickett et al., EMNLP 2018)
PDF:
https://preview.aclanthology.org/dois-2013-emnlp/W18-5810.pdf