Brandon Prickett


2025

Probing Neural Network Generalization using Default Patterns
Brandon Prickett | Tianyi Nyu | Katya Pertsova
Proceedings of the 22nd SIGMORPHON Workshop on Computational Morphology, Phonology, and Phonetics

Whether neural-network models can learn minority-default patterns has been a matter of some controversy. Results based on modeling real human language data are hard to interpret due to their complexity. We therefore examine the learning of a simple artificial language pattern involving defaults using three computational models: an Encoder-Decoder RNN, a Transformer Encoder, and a Logistic Regression. Overall, we find that the models have the hardest time with minority defaults, but can eventually learn them and apply them to novel words (although they do not always extend them to completely novel segments or novel CV-sequences). Type frequency has the largest effect on learning in all models, trumping the effect of distribution. We examine the weights of two models to provide further insights into how defaults are represented inside the models.
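A minimal sketch of this kind of minority-default setup, using only the logistic regression baseline among the three models named above (the segment inventory, CVCV word shapes, bag-of-segments encoding, and majority/minority split are illustrative assumptions, not the paper's materials):

# A minimal sketch (not the authors' code): a logistic regression learns an
# artificial suffix pattern in which the majority suffix is conditioned on the
# stem-final vowel and the minority suffix is the elsewhere default.
import random
import numpy as np
from sklearn.linear_model import LogisticRegression

random.seed(0)
CONS = list("ptkbdgmn")
VOWELS = list("aeiou")
MAJORITY_VOWELS = {"a", "e", "o"}          # conditioned (majority) context
INVENTORY = CONS + VOWELS

def make_word():
    # CVCV nonce words built from the training inventory
    return (random.choice(CONS) + random.choice(VOWELS)
            + random.choice(CONS) + random.choice(VOWELS))

def label(word):
    # 1 = conditioned majority suffix, 0 = minority default suffix
    return 1 if word[-1] in MAJORITY_VOWELS else 0

def encode(word):
    # bag-of-segments counts plus a one-hot for the final segment;
    # segments outside the inventory (novel segments) contribute no features
    idx = {s: i for i, s in enumerate(INVENTORY)}
    vec = np.zeros(2 * len(INVENTORY))
    for ch in word:
        if ch in idx:
            vec[idx[ch]] += 1
    if word[-1] in idx:
        vec[len(INVENTORY) + idx[word[-1]]] = 1
    return vec

train_words = list({make_word() for _ in range(400)})
X = np.array([encode(w) for w in train_words])
y = np.array([label(w) for w in train_words])
clf = LogisticRegression(max_iter=1000).fit(X, y)

# Wug test: novel words, including ones containing segments never seen in training
for test in ["zuka", "zuki", "pixu"]:      # 'z' and 'x' are novel segments
    p_default = clf.predict_proba([encode(test)])[0][0]
    print(test, "P(default suffix) =", round(p_default, 3))

Because the minority suffix is the elsewhere case, a learner that has acquired the default should assign it high probability to nonce forms whose final vowel falls outside the conditioned majority context, including forms built from novel segments.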

2023

Proceedings of the Society for Computation in Linguistics 2023
Tim Hunter | Brandon Prickett
Proceedings of the Society for Computation in Linguistics 2023

2022

Proceedings of the Society for Computation in Linguistics 2022
Allyson Ettinger | Tim Hunter | Brandon Prickett
Proceedings of the Society for Computation in Linguistics 2022

Learning Stress Patterns with a Sequence-to-Sequence Neural Network
Brandon Prickett | Joe Pater
Proceedings of the Society for Computation in Linguistics 2022

2021

Proceedings of the Society for Computation in Linguistics 2021
Allyson Ettinger | Ellie Pavlick | Brandon Prickett
Proceedings of the Society for Computation in Linguistics 2021

2020

Probing RNN Encoder-Decoder Generalization of Subregular Functions using Reduplication
Max Nelson | Hossep Dolatian | Jonathan Rawski | Brandon Prickett
Proceedings of the Society for Computation in Linguistics 2020

2019

Learning Exceptionality and Variation with Lexically Scaled MaxEnt
Coral Hughto | Andrew Lamont | Brandon Prickett | Gaja Jarosz
Proceedings of the Society for Computation in Linguistics (SCiL) 2019

2018

Seq2Seq Models with Dropout can Learn Generalizable Reduplication
Brandon Prickett | Aaron Traylor | Joe Pater
Proceedings of the Fifteenth Workshop on Computational Research in Phonetics, Phonology, and Morphology

Natural language reduplication can pose a challenge to neural models of language and has been argued to require variables (Marcus et al., 1999). Sequence-to-sequence neural networks have been shown to perform well on a number of other morphological tasks (Cotterell et al., 2016) and to produce results that correlate highly with human behavior (Kirov, 2017; Kirov & Cotterell, 2018), yet they do not include any explicit variables in their architecture. We find that they can learn a reduplicative pattern that generalizes to novel segments if they are trained with dropout (Srivastava et al., 2014). We argue that this matches the scope of generalization observed in human reduplication.
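A rough sketch of the kind of setup the abstract describes, under assumptions that are ours rather than the paper's (PyTorch, a GRU encoder-decoder, a toy segment inventory, and illustrative hyperparameters): a character-level seq2seq model with dropout is trained on total reduplication and then wug-tested on a stem containing a segment withheld from training.

# A rough sketch, not the paper's implementation: a character-level GRU
# encoder-decoder with dropout learns total reduplication (stem -> stem+stem)
# and is then tested on a stem containing a segment withheld from training.
import random
import torch
import torch.nn as nn

random.seed(0)
torch.manual_seed(0)

CONS, VOWELS = list("ptkbdgmns"), list("aeiou")
HELD_OUT = "s"                              # segment excluded from training data
ALPHABET = CONS + VOWELS
PAD, BOS, EOS = 0, 1, 2
stoi = {c: i + 3 for i, c in enumerate(ALPHABET)}
itos = {i: c for c, i in stoi.items()}
V = len(ALPHABET) + 3

def encode(s):
    return [stoi[c] for c in s]

def make_stems(n):
    return [random.choice(CONS) + random.choice(VOWELS) + random.choice(CONS)
            for _ in range(n)]

class Seq2Seq(nn.Module):
    def __init__(self, hid=64, p_drop=0.5):
        super().__init__()
        self.emb = nn.Embedding(V, hid, padding_idx=PAD)
        self.drop = nn.Dropout(p_drop)      # dropout on embedding outputs
        self.enc = nn.GRU(hid, hid, batch_first=True)
        self.dec = nn.GRU(hid, hid, batch_first=True)
        self.out = nn.Linear(hid, V)

    def forward(self, src, tgt_in):
        _, h = self.enc(self.drop(self.emb(src)))
        dec_out, _ = self.dec(self.drop(self.emb(tgt_in)), h)
        return self.out(dec_out)

def batchify(words):
    src = torch.tensor([encode(w) for w in words])
    tgt = torch.tensor([[BOS] + encode(w + w) + [EOS] for w in words])
    return src, tgt[:, :-1], tgt[:, 1:]

train = list({w for w in make_stems(600) if HELD_OUT not in w})
model = Seq2Seq()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss(ignore_index=PAD)

for step in range(300):
    src, tgt_in, tgt_out = batchify(random.sample(train, 32))
    loss = loss_fn(model(src, tgt_in).reshape(-1, V), tgt_out.reshape(-1))
    opt.zero_grad()
    loss.backward()
    opt.step()

@torch.no_grad()
def reduplicate(stem, max_len=10):
    # greedy decoding, one character at a time
    model.eval()
    _, h = model.enc(model.emb(torch.tensor([encode(stem)])))
    tok, output = torch.tensor([[BOS]]), []
    for _ in range(max_len):
        o, h = model.dec(model.emb(tok), h)
        tok = model.out(o)[:, -1].argmax(-1, keepdim=True)
        if tok.item() == EOS:
            break
        output.append(itos.get(tok.item(), "?"))
    return "".join(output)

print("trained segments: ", reduplicate("pam"))
print("novel segment /s/:", reduplicate("sap"))

The abstract's claim is that training with dropout is what enables extension of copying to novel segments; in this sketch that corresponds to the p_drop parameter applied to the embedding outputs, which can be set to 0 to compare against a dropout-free run.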