Paul Smolensky


2023

How Much Do Language Models Copy From Their Training Data? Evaluating Linguistic Novelty in Text Generation Using RAVEN
R. Thomas McCoy | Paul Smolensky | Tal Linzen | Jianfeng Gao | Asli Celikyilmaz
Transactions of the Association for Computational Linguistics, Volume 11

Current language models can generate high-quality text. Are they simply copying text they have seen before, or have they learned generalizable linguistic abstractions? To tease apart these possibilities, we introduce RAVEN, a suite of analyses for assessing the novelty of generated text, focusing on sequential structure (n-grams) and syntactic structure. We apply these analyses to four neural language models trained on English (an LSTM, a Transformer, Transformer-XL, and GPT-2). For local structure—e.g., individual dependencies—text generated with a standard sampling scheme is substantially less novel than our baseline of human-generated text from each model’s test set. For larger-scale structure—e.g., overall sentence structure—model-generated text is as novel or even more novel than the human-generated baseline, but models still sometimes copy substantially, in some cases duplicating passages over 1,000 words long from the training set. We also perform extensive manual analysis, finding evidence that GPT-2 uses both compositional and analogical generalization mechanisms and showing that GPT-2’s novel text is usually well-formed morphologically and syntactically but has reasonably frequent semantic issues (e.g., being self-contradictory).
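The n-gram half of this analysis can be illustrated with a short sketch: collect the n-grams of the generated text, compare them against the set of n-grams in the training data, and report the fraction that never appear there. The Python below is a minimal illustration of that idea, not the released RAVEN code; the token lists and n values are toy placeholders.

# Minimal sketch of an n-gram novelty check in the spirit of RAVEN
# (not the released RAVEN code). Inputs are plain token lists.

def ngrams(tokens, n):
    """Return all n-grams of a token sequence as tuples."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def novelty_rate(generated_tokens, training_tokens, n):
    """Fraction of generated n-grams that never occur in the training data."""
    train_set = set(ngrams(training_tokens, n))
    gen = ngrams(generated_tokens, n)
    if not gen:
        return 0.0
    return sum(1 for g in gen if g not in train_set) / len(gen)

# Toy example: bigram vs. 5-gram novelty of a generated continuation.
train = "the cat sat on the mat".split()
gen = "the cat sat on a new mat".split()
print(novelty_rate(gen, train, 2), novelty_rate(gen, train, 5))  # 0.5 1.0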

2021

Enriching Transformers with Structured Tensor-Product Representations for Abstractive Summarization
Yichen Jiang | Asli Celikyilmaz | Paul Smolensky | Paul Soulos | Sudha Rao | Hamid Palangi | Roland Fernandez | Caitlin Smith | Mohit Bansal | Jianfeng Gao
Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies

Abstractive summarization, the task of generating a concise summary of input documents, requires: (1) reasoning over the source document to determine the salient pieces of information scattered across the long document, and (2) composing a cohesive text by reconstructing these salient facts into a shorter summary that faithfully reflects the complex relations connecting these facts. In this paper, we adapt TP-Transformer (Schlag et al., 2019), an architecture that enriches the original Transformer (Vaswani et al., 2017) with the explicitly compositional Tensor Product Representation (TPR), for the task of abstractive summarization. The key feature of our model is a structural bias that we introduce by encoding two separate representations for each token: the syntactic structure (with role vectors) and the semantic content (with filler vectors). The model then binds the role and filler vectors into the TPR as the layer output. We argue that the structured intermediate representations enable the model to take better control of the contents (salient facts) and structures (the syntax that connects the facts) when generating the summary. Empirically, we show that our TP-Transformer outperforms the Transformer and the original TP-Transformer significantly on several abstractive summarization datasets based on both automatic and human evaluations. On several syntactic and semantic probing tasks, we demonstrate the emergent structural information in the role vectors, a performance gain attributable to the information specificity of the role vectors, and improved syntactic interpretability in the TPR layer outputs. (Code and models are available at https://github.com/jiangycTarheel/TPT-Summ)
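The role/filler separation described here builds on Tensor Product Representations. The sketch below shows the generic TPR operation of binding fillers to roles with outer products and unbinding them again when the role vectors are orthonormal; the binding used inside the TP-Transformer layers differs in its details, so treat this as an illustration of the underlying idea rather than the model's implementation.

# Illustrative TPR binding/unbinding with orthonormal role vectors
# (generic mechanism, not the TP-Transformer's exact layer).
import numpy as np

def tpr_bind(fillers, roles):
    """Bind each filler (content) vector to a role (structure) vector
    with an outer product and superpose the results into one tensor."""
    return sum(np.outer(f, r) for f, r in zip(fillers, roles))

def tpr_unbind(tpr, role):
    """Recover the filler bound to `role` (exact for orthonormal roles)."""
    return tpr @ role

rng = np.random.default_rng(0)
roles = np.linalg.qr(rng.normal(size=(4, 4)))[0]   # orthonormal role vectors
fillers = rng.normal(size=(3, 4))                  # token content vectors
T = tpr_bind(fillers, roles[:3])
print(np.allclose(tpr_unbind(T, roles[0]), fillers[0]))  # True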

Structural Biases for Improving Transformers on Translation into Morphologically Rich Languages
Paul Soulos | Sudha Rao | Caitlin Smith | Eric Rosen | Asli Celikyilmaz | R. Thomas McCoy | Yichen Jiang | Coleman Haley | Roland Fernandez | Hamid Palangi | Jianfeng Gao | Paul Smolensky
Proceedings of the 4th Workshop on Technologies for MT of Low Resource Languages (LoResMT2021)

Machine translation has seen rapid progress with the advent of Transformer-based models. These models have no explicit linguistic structure built into them, yet they may still implicitly learn structured relationships by attending to relevant tokens. We hypothesize that this structural learning could be made more robust by explicitly endowing Transformers with a structural bias, and we investigate two methods for building in such a bias. One method, the TP-Transformer, augments the traditional Transformer architecture to include an additional component to represent structure. The second method imbues structure at the data level by segmenting the data with morphological tokenization. We test these methods on translating from English into morphologically rich languages, Turkish and Inuktitut, and consider both automatic metrics and human evaluations. We find that each of these two approaches allows the network to achieve better performance, but this improvement is dependent on the size of the dataset. In sum, structural encoding methods make Transformers more sample-efficient, enabling them to perform better from smaller amounts of data.

Emergent Gestural Scores in a Recurrent Neural Network Model of Vowel Harmony
Caitlin Smith | Charlie O’Hara | Eric Rosen | Paul Smolensky
Proceedings of the Society for Computation in Linguistics 2021

Testing for Grammatical Category Abstraction in Neural Language Models
Najoung Kim | Paul Smolensky
Proceedings of the Society for Computation in Linguistics 2021

2020

Tensor Product Decomposition Networks: Uncovering Representations of Structure Learned by Neural Networks
R. Thomas McCoy | Tal Linzen | Ewan Dunbar | Paul Smolensky
Proceedings of the Society for Computation in Linguistics 2020

Discovering the Compositional Structure of Vector Representations with Role Learning Networks
Paul Soulos | R. Thomas McCoy | Tal Linzen | Paul Smolensky
Proceedings of the Third BlackboxNLP Workshop on Analyzing and Interpreting Neural Networks for NLP

How can neural networks perform so well on compositional tasks even though they lack explicit compositional representations? We use a novel analysis technique called ROLE to show that recurrent neural networks perform well on such tasks by converging to solutions which implicitly represent symbolic structure. This method uncovers a symbolic structure which, when properly embedded in vector space, closely approximates the encodings of a standard seq2seq network trained to perform the compositional SCAN task. We verify the causal importance of the discovered symbolic structure by showing that, when we systematically manipulate hidden embeddings based on this symbolic structure, the model’s output is changed in the way predicted by our analysis.
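The causal manipulation described in the last sentence can be pictured with TPR algebra: if an encoding is approximately a sum of filler-role bindings, then swapping the filler occupying one structural slot is a simple vector-space edit. The toy sketch below (random vectors, not the ROLE codebase) shows such a swap.

# Toy filler-swap intervention on a TPR-style encoding (illustrative only).
import numpy as np

def swap_filler(encoding, role, old_filler, new_filler):
    """Replace old_filler with new_filler in the slot addressed by `role`."""
    return encoding - np.outer(old_filler, role) + np.outer(new_filler, role)

rng = np.random.default_rng(1)
d = 8
roles = np.linalg.qr(rng.normal(size=(d, d)))[0]
jump, walk = rng.normal(size=(2, d))
enc = np.outer(jump, roles[0])                     # "jump" bound to slot 0
edited = swap_filler(enc, roles[0], jump, walk)    # now "walk" fills slot 0
print(np.allclose(edited, np.outer(walk, roles[0])))  # True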

Invertible Tree Embeddings using a Cryptographic Role Embedding Scheme
Coleman Haley | Paul Smolensky
Proceedings of the 28th International Conference on Computational Linguistics

We present a novel method for embedding trees in a vector space based on Tensor-Product Representations (TPRs) which allows for inversion: the retrieval of the original tree structure and nodes from the vectorial embedding. Unlike previous attempts, this does not come at the cost of intractable representation size; we utilize a method for non-exact inversion, showing that it works well when there is sufficient randomness in the representation scheme for simple data and providing an upper bound on its error. To handle the huge number of possible tree positions without memoizing position representation vectors, we present a method (Cryptographic Role Embedding) using cryptographic hashing algorithms that allows for the representation of unboundedly many positions. Through experiments on parse tree data, we show that a 30,000-dimensional Cryptographic Role Embedding of trees can provide invertibility with error < 1% for trees that previous methods would require 8.6 × 10^57 dimensions to represent.
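The central trick can be sketched briefly: hash a tree position (the path from the root) with a cryptographic hash and use the digest to seed a pseudorandom role vector, so unboundedly many positions receive nearly orthogonal roles without any of them being stored in advance. The snippet below is an illustrative reconstruction under those assumptions, not the paper's code; the dimension and hash choice are stand-ins.

# Sketch of a "cryptographic role": a deterministic pseudorandom role vector
# for any tree position, derived by hashing the path from the root.
import hashlib
import numpy as np

def role_vector(path, dim=30000):
    """Map a position such as 'LRL' (left-right-left from the root)
    to a deterministic pseudorandom unit vector of size `dim`."""
    seed = int.from_bytes(hashlib.sha256(path.encode()).digest()[:8], "big")
    v = np.random.default_rng(seed).standard_normal(dim)
    return v / np.linalg.norm(v)

# Positions never need to be enumerated or memoized ahead of time:
r_l, r_lr = role_vector("L"), role_vector("LR")
print(float(r_l @ r_lr))  # near 0: distinct positions get nearly orthogonal roles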

2019

Augmenting Compositional Models for Knowledge Base Completion Using Gradient Representations
Matthias R. Lalisse | Paul Smolensky
Proceedings of the Society for Computation in Linguistics (SCiL) 2019

2018

Dynamic encoding of structural uncertainty in gradient symbols
Pyeong Whan Cho | Matthew Goldrick | Richard L. Lewis | Paul Smolensky
Proceedings of the 8th Workshop on Cognitive Modeling and Computational Linguistics (CMCL 2018)

Tensor Product Generation Networks for Deep NLP Modeling
Qiuyuan Huang | Paul Smolensky | Xiaodong He | Li Deng | Dapeng Wu
Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers)

We present a new approach to the design of deep networks for natural language processing (NLP), based on the general technique of Tensor Product Representations (TPRs) for encoding and processing symbol structures in distributed neural networks. A network architecture — the Tensor Product Generation Network (TPGN) — is proposed which is capable in principle of carrying out TPR computation, but which uses unconstrained deep learning to design its internal representations. Instantiated in a model for image-caption generation, TPGN outperforms LSTM baselines when evaluated on the COCO dataset. The TPR-capable structure enables interpretation of internal representations and operations, which prove to contain considerable grammatical content. Our caption-generation model can be interpreted as generating sequences of grammatical categories and retrieving words by their categories from a plan encoded as a distributed representation.
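The interpretation in the final sentence can be illustrated with a toy TPR retrieval loop: a "plan" tensor superposes word (filler) vectors bound to per-step role vectors, and each generation step unbinds one role and looks up the nearest vocabulary item. All vectors below are random stand-ins; the trained TPGN learns such representations end to end rather than being given them.

# Toy sketch of generation as role-by-role retrieval from a TPR "plan"
# (illustrative stand-in, not the trained TPGN).
import numpy as np

rng = np.random.default_rng(2)
d, vocab_size = 16, 10
vocab = rng.normal(size=(vocab_size, d))
vocab /= np.linalg.norm(vocab, axis=1, keepdims=True)   # unit word embeddings
roles = np.linalg.qr(rng.normal(size=(d, d)))[0]        # one role per time step

caption_ids = [4, 7, 1, 9]                               # planned word sequence
plan = sum(np.outer(vocab[i], roles[t]) for t, i in enumerate(caption_ids))

decoded = []
for t in range(len(caption_ids)):
    retrieved = plan @ roles[t]                          # unbind the role for step t
    decoded.append(int(np.argmax(vocab @ retrieved)))    # nearest vocabulary item
print(decoded == caption_ids)                            # True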

1994

Optimality Theory: Universal Grammar, Learning and Parsing Algorithms, and Connectionist Foundations (Abstract)
Paul Smolensky | Bruce Tesar
32nd Annual Meeting of the Association for Computational Linguistics