Dana Angluin


2022

Formal Language Recognition by Hard Attention Transformers: Perspectives from Circuit Complexity
Yiding Hao | Dana Angluin | Robert Frank
Transactions of the Association for Computational Linguistics, Volume 10

This paper analyzes three formal models of Transformer encoders that differ in the form of their self-attention mechanism: unique hard attention (UHAT); generalized unique hard attention (GUHAT), which generalizes UHAT; and averaging hard attention (AHAT). We show that UHAT and GUHAT Transformers, viewed as string acceptors, can only recognize formal languages in the complexity class AC0, the class of languages recognizable by families of Boolean circuits of constant depth and polynomial size. This upper bound subsumes Hahn’s (2020) results that GUHAT cannot recognize the DYCK languages or the PARITY language, since those languages are outside AC0 (Furst et al., 1984). In contrast, the non-AC0 languages MAJORITY and DYCK-1 are recognizable by AHAT networks, implying that AHAT can recognize languages that UHAT and GUHAT cannot.
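To make the difference between the attention variants concrete, here is a minimal NumPy sketch of a single attention step (an illustration, not the paper's formal definitions): unique hard attention returns the value at one maximal-scoring position, while averaging hard attention returns the average of the values at all maximal-scoring positions.

```python
import numpy as np

def unique_hard_attention(scores, values):
    """UHAT: attend to the single position with the highest score
    (ties broken here by taking the leftmost maximum)."""
    i = int(np.argmax(scores))  # np.argmax returns the first maximum
    return values[i]

def averaging_hard_attention(scores, values):
    """AHAT: average the values at *all* positions that attain the
    maximum score, rather than selecting just one."""
    mask = scores == scores.max()
    return values[mask].mean(axis=0)

# Toy illustration: the averaging step is what gives AHAT counting-like
# power. With constant scores, the output is the mean of all value
# vectors, which can encode the fraction of 1s needed for MAJORITY.
scores = np.array([1.0, 3.0, 3.0, 0.5])
values = np.array([[1.0, 0.0],
                   [0.0, 1.0],
                   [1.0, 1.0],
                   [0.0, 0.0]])
print(unique_hard_attention(scores, values))     # value at first argmax: [0. 1.]
print(averaging_hard_attention(scores, values))  # mean over both maxima: [0.5 1. ]
```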

2018

Context-Free Transductions with Neural Stacks
Yiding Hao | William Merrill | Dana Angluin | Robert Frank | Noah Amsel | Andrew Benz | Simon Mendelsohn
Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP

This paper analyzes the behavior of stack-augmented recurrent neural network (RNN) models. Motivated by the architectural similarity between stack RNNs and pushdown transducers, we train stack RNN models on a number of tasks, including string reversal, context-free language modelling, and cumulative XOR evaluation. Examining the behavior of our networks, we show that stack-augmented RNNs can discover intuitive stack-based strategies for solving these tasks. However, stack RNNs are more difficult to train than classical architectures such as LSTMs; rather than employing stack-based strategies, more complex networks often find approximate solutions that use the stack as unstructured memory.
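The stack in a stack RNN must be differentiable so that the controller and the stack can be trained end-to-end by gradient descent. Below is a minimal NumPy sketch of one update of a "superposition"-style continuous stack (an illustration under assumed conventions, not the paper's exact architecture): the controller emits weights for push, pop, and no-op, and the new stack is the weighted blend of the three outcomes.

```python
import numpy as np

def soft_stack_step(stack, value, a_push, a_pop, a_noop):
    """One step of a differentiable 'superposition' stack, assuming the
    controller emits action weights with a_push + a_pop + a_noop = 1.
    `stack` has shape (depth, dim); row 0 is the top of the stack."""
    pushed = np.vstack([value[None, :], stack[:-1]])           # shift down, put value on top
    popped = np.vstack([stack[1:], np.zeros_like(stack[:1])])  # shift up, zero-pad the bottom
    return a_push * pushed + a_pop * popped + a_noop * stack

# Toy usage: push two vectors, then pop once. With one-hot action
# weights the stack behaves exactly like a discrete stack; fractional
# weights give the differentiable relaxation used during training.
stack = np.zeros((4, 2))
stack = soft_stack_step(stack, np.array([1.0, 0.0]), 1.0, 0.0, 0.0)
stack = soft_stack_step(stack, np.array([0.0, 1.0]), 1.0, 0.0, 0.0)
stack = soft_stack_step(stack, np.zeros(2), 0.0, 1.0, 0.0)
print(stack[0])  # top of stack after the pop: [1. 0.]
```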

2011

Effects of Meaning-Preserving Corrections on Language Learning
Dana Angluin | Leonor Becerra-Bonache
Proceedings of the Fifteenth Conference on Computational Natural Language Learning

2009

Experiments Using OSTIA for a Language Production Task
Dana Angluin | Leonor Becerra-Bonache
Proceedings of the EACL 2009 Workshop on Computational Linguistic Aspects of Grammatical Inference