Tom Griffiths


2018

Exploiting Attention to Reveal Shortcomings in Memory Models
Kaylee Burns | Aida Nematzadeh | Erin Grant | Alison Gopnik | Tom Griffiths
Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP

The decision-making processes of deep networks are difficult to understand, and while their accuracy often improves with increased architectural complexity, so too does their opacity. Practical use of machine learning models, especially for question-answering applications, demands a system that is interpretable. We analyze the attention of a memory network model to reconcile contradictory performance on a challenging question-answering dataset that is inspired by theory-of-mind experiments. We equate success on questions with task classification, which explains not only test-time failures but also how well the model generalizes to new training conditions.
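
As a rough illustration of the kind of attention analysis described above, the sketch below (a hypothetical example, not the authors' code) computes a single memory-network hop's attention distribution over stored story sentences and records which sentence receives the most weight, so that model behavior can be grouped by what the question appears to be treated as asking.

    import numpy as np

    def softmax(x):
        e = np.exp(x - x.max())
        return e / e.sum()

    def attention_over_memory(query_emb, memory_embs):
        # One memory-network "hop": score each stored sentence embedding
        # against the query and normalize into an attention distribution.
        scores = memory_embs @ query_emb
        return softmax(scores)

    # Hypothetical embeddings: 5 story sentences, 20-dimensional query.
    rng = np.random.default_rng(0)
    memory = rng.normal(size=(5, 20))
    query = rng.normal(size=20)

    attn = attention_over_memory(query, memory)
    print("attention:", np.round(attn, 3))
    print("model attends mostly to sentence", int(np.argmax(attn)))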

Evaluating Theory of Mind in Question Answering
Aida Nematzadeh | Kaylee Burns | Erin Grant | Alison Gopnik | Tom Griffiths
Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing

We propose a new dataset for evaluating question answering models with respect to their capacity to reason about beliefs. Our tasks are inspired by theory-of-mind experiments that examine whether children are able to reason about the beliefs of others, in particular when those beliefs differ from reality. We evaluate a number of recent neural models with memory augmentation. We find that all fail on our tasks, which require keeping track of inconsistent states of the world; moreover, the models’ accuracy decreases notably when random sentences are introduced to the tasks at test time.
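
The false-belief structure behind these tasks can be made concrete with a toy story in the style of the classic Sally-Anne experiment; the snippet below is an illustrative sketch only, not drawn from the released dataset.

    # Illustrative false-belief story (hypothetical example).
    story = [
        "Sally entered the kitchen.",
        "Sally put the apple in the box.",
        "Sally left the kitchen.",
        "Anne moved the apple to the basket.",  # Sally does not observe this move
    ]
    question = "Where will Sally look for the apple?"
    # Tracking Sally's (now false) belief gives "box";
    # tracking only the true world state gives "basket".
    answer_belief = "box"
    answer_reality = "basket"
    print(question, "->", answer_belief)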

2007

A fully Bayesian approach to unsupervised part-of-speech tagging
Sharon Goldwater | Tom Griffiths
Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics