Caitlin Smith


2021

Structural Biases for Improving Transformers on Translation into Morphologically Rich Languages
Paul Soulos | Sudha Rao | Caitlin Smith | Eric Rosen | Asli Celikyilmaz | R. Thomas McCoy | Yichen Jiang | Coleman Haley | Roland Fernandez | Hamid Palangi | Jianfeng Gao | Paul Smolensky
Proceedings of the 4th Workshop on Technologies for MT of Low Resource Languages (LoResMT2021)

Machine translation has seen rapid progress with the advent of Transformer-based models. These models have no explicit linguistic structure built into them, yet they may still implicitly learn structured relationships by attending to relevant tokens. We hypothesize that this structural learning could be made more robust by explicitly endowing Transformers with a structural bias, and we investigate two methods for building in such a bias. One method, the TP-Transformer, augments the traditional Transformer architecture to include an additional component to represent structure. The second method imbues structure at the data level by segmenting the data with morphological tokenization. We test these methods on translating from English into morphologically rich languages, Turkish and Inuktitut, and consider both automatic metrics and human evaluations. We find that each of these two approaches allows the network to achieve better performance, but this improvement is dependent on the size of the dataset. In sum, structural encoding methods make Transformers more sample-efficient, enabling them to perform better from smaller amounts of data.
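As a rough illustration of the data-level structural bias described in this abstract, the sketch below shows a greedy morpheme-based segmentation in Python; the tiny morpheme list and the Turkish example (ev-ler-imiz-den, "from our houses") are illustrative assumptions, not the tokenizer or data used in the paper.

```python
# Toy sketch of morphological tokenization: split a word into known morphemes
# rather than statistically derived subword units. The morpheme list below is a
# hand-picked assumption for one Turkish example, not the paper's pipeline.
MORPHEMES = ["ev", "ler", "imiz", "den"]  # house-PL-1PL.POSS-ABL

def morph_tokenize(word, lexicon=MORPHEMES):
    """Greedy left-to-right segmentation into known morphemes."""
    tokens, i = [], 0
    while i < len(word):
        for m in sorted(lexicon, key=len, reverse=True):
            if word.startswith(m, i):
                tokens.append(m)
                i += len(m)
                break
        else:  # no morpheme matches here: fall back to a single character
            tokens.append(word[i])
            i += 1
    return tokens

print(morph_tokenize("evlerimizden"))  # ['ev', 'ler', 'imiz', 'den']
```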

Enriching Transformers with Structured Tensor-Product Representations for Abstractive Summarization
Yichen Jiang | Asli Celikyilmaz | Paul Smolensky | Paul Soulos | Sudha Rao | Hamid Palangi | Roland Fernandez | Caitlin Smith | Mohit Bansal | Jianfeng Gao
Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies

Abstractive summarization, the task of generating a concise summary of input documents, requires: (1) reasoning over the source document to determine the salient pieces of information scattered across the long document, and (2) composing a cohesive text by reconstructing these salient facts into a shorter summary that faithfully reflects the complex relations connecting these facts. In this paper, we adapt TP-Transformer (Schlag et al., 2019), an architecture that enriches the original Transformer (Vaswani et al., 2017) with the explicitly compositional Tensor Product Representation (TPR), for the task of abstractive summarization. The key feature of our model is a structural bias that we introduce by encoding two separate representations for each token, one for its syntactic structure (a role vector) and one for its semantic content (a filler vector). The model then binds the role and filler vectors into the TPR as the layer output. We argue that the structured intermediate representations enable the model to take better control of the contents (salient facts) and structures (the syntax that connects the facts) when generating the summary. Empirically, we show that our TP-Transformer significantly outperforms the Transformer and the original TP-Transformer on several abstractive summarization datasets based on both automatic and human evaluations. On several syntactic and semantic probing tasks, we demonstrate the structural information that emerges in the role vectors, the performance gains attributable to the information specificity of the role vectors, and the improved syntactic interpretability of the TPR layer outputs. (Code and models are available at https://github.com/jiangycTarheel/TPT-Summ)
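The role/filler binding described in this abstract can be sketched in a few lines of Python with NumPy; the dimensions and random vectors below are toy assumptions, not the trained model's representations. Each role is bound to its filler by an outer (tensor) product, the bindings are summed into a single TPR, and with orthonormal roles a filler can be recovered by contracting the TPR with its role.

```python
# Minimal sketch of Tensor Product Representation (TPR) binding and unbinding.
# Toy dimensions and random vectors; illustrative only, not the paper's model.
import numpy as np

rng = np.random.default_rng(0)
d_role, d_filler, n = 4, 8, 3  # hypothetical sizes

# Orthonormal role vectors (columns of Q) make unbinding exact.
Q, _ = np.linalg.qr(rng.standard_normal((d_role, n)))
roles = [Q[:, i] for i in range(n)]
fillers = [rng.standard_normal(d_filler) for _ in range(n)]

def bind(role, filler):
    """Bind a role vector to a filler vector via their outer (tensor) product."""
    return np.outer(role, filler)  # shape: (d_role, d_filler)

# A structured representation is the sum of its role/filler bindings.
tpr = sum(bind(r, f) for r, f in zip(roles, fillers))

# Unbinding: contracting the TPR with a role recovers that role's filler.
recovered = roles[0] @ tpr
assert np.allclose(recovered, fillers[0])
```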

Emergent Gestural Scores in a Recurrent Neural Network Model of Vowel Harmony
Caitlin Smith | Charlie O’Hara | Eric Rosen | Paul Smolensky
Proceedings of the Society for Computation in Linguistics 2021

Learnability of derivationally opaque processes in the Gestural Harmony Model
Caitlin Smith | Charlie O’Hara
Proceedings of the Society for Computation in Linguistics 2021

2019

Weakly deterministic transformations are subregular
Andrew Lamont | Charlie O’Hara | Caitlin Smith
Proceedings of the 16th Workshop on Computational Research in Phonetics, Phonology, and Morphology

Whether phonological transformations in general are subregular is an open question. This is the case for most transformations, which have been shown to be subsequential, but it is not known whether weakly deterministic mappings form a proper subset of the regular functions. This paper demonstrates that there are regular functions that are not weakly deterministic, and, because all attested processes are weakly deterministic, supports the subregular hypothesis.