Aida Nematzadeh


2021

Decoupling the Role of Data, Attention, and Losses in Multimodal Transformers
Lisa Anne Hendricks | John Mellor | Rosalia Schneider | Jean-Baptiste Alayrac | Aida Nematzadeh
Transactions of the Association for Computational Linguistics, Volume 9

Recently, multimodal transformer models have gained popularity because their performance on downstream tasks suggests they learn rich visual-linguistic representations. Focusing on zero-shot image retrieval tasks, we study three important factors that can impact the quality of learned representations: pretraining data, the attention mechanism, and loss functions. By pretraining models on six datasets, we observe that dataset noise and language similarity to our downstream task are important indicators of model performance. Through architectural analysis, we learn that models with a multimodal attention mechanism can outperform deeper models with modality-specific attention mechanisms. Finally, we show that successful contrastive losses used in the self-supervised learning literature do not yield similar performance gains when used in multimodal transformers.
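To make the contrastive-loss comparison concrete, the sketch below implements a generic symmetric InfoNCE-style image-text contrastive loss in PyTorch. The function name, embedding shapes, and temperature value are assumptions for illustration, not the exact objectives evaluated in the paper.

import torch
import torch.nn.functional as F

def image_text_contrastive_loss(image_emb, text_emb, temperature=0.07):
    # Illustrative InfoNCE-style loss (assumed form, not the paper's exact objective).
    # image_emb, text_emb: (batch, dim) embeddings of matching image-caption pairs.
    image_emb = F.normalize(image_emb, dim=-1)
    text_emb = F.normalize(text_emb, dim=-1)
    # Cosine-similarity logits; matching pairs lie on the diagonal.
    logits = image_emb @ text_emb.t() / temperature
    targets = torch.arange(logits.size(0), device=logits.device)
    # Symmetric: retrieve the caption given the image and the image given the caption.
    loss_i2t = F.cross_entropy(logits, targets)
    loss_t2i = F.cross_entropy(logits.t(), targets)
    return 0.5 * (loss_i2t + loss_t2i)

In zero-shot retrieval, the same similarity matrix can be reused at test time to rank captions for each image.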

Probing Image-Language Transformers for Verb Understanding
Lisa Anne Hendricks | Aida Nematzadeh
Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021

2020

Learning to Segment Actions from Observation and Narration
Daniel Fried | Jean-Baptiste Alayrac | Phil Blunsom | Chris Dyer | Stephen Clark | Aida Nematzadeh
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics

We apply a generative segmental model of task structure, guided by narration, to action segmentation in video. We focus on unsupervised and weakly-supervised settings where no action labels are known during training. Despite its simplicity, our model performs competitively with previous work on a dataset of naturalistic instructional videos. Our model allows us to vary the sources of supervision used in training, and we find that both task structure and narrative language provide large benefits in segmentation quality.

2019

Language Learning and Processing in People and Machines
Aida Nematzadeh | Richard Futrell | Roger Levy
Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Tutorials

The goal of this tutorial is to bring the fields of computational linguistics and computational cognitive science closer: we will introduce different stages of language acquisition and their parallel problems in NLP. As an example, one of the early challenges children face is mapping the meaning of word labels (such as “cat”) to their referents (the furry animal in the living room). Word learning is similar to the word alignment problem in machine translation. We explain the current computational models of language acquisition, their limitations, and how the insights from these models can be incorporated into NLP applications. Moreover, we discuss how we can take advantage of the cognitive science of language in computational linguistics: for example, by designing cognitively-motivated evaluation tasks or building language-learning inductive biases into our models.
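To make the word-learning-as-alignment analogy concrete, here is a minimal cross-situational learning sketch in Python; the scenes, referent labels, and co-occurrence-counting heuristic are illustrative assumptions rather than a model presented in the tutorial.

from collections import Counter, defaultdict

def cross_situational_learner(scenes):
    # Count how often each word co-occurs with each candidate referent across
    # ambiguous scenes, then map each word to its best-supported referent.
    counts = defaultdict(Counter)
    for words, referents in scenes:
        for w in words:
            for r in referents:
                counts[w][r] += 1
    return {w: c.most_common(1)[0][0] for w, c in counts.items()}

# Hypothetical scenes: an utterance paired with the set of visible referents.
scenes = [
    (["the", "cat"], ["CAT", "SOFA"]),
    (["a", "cat", "sleeps"], ["CAT", "BED"]),
    (["the", "dog"], ["DOG", "SOFA"]),
]
print(cross_situational_learner(scenes))  # "cat" maps to CAT, "dog" to DOG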

2018

Evaluating Theory of Mind in Question Answering
Aida Nematzadeh | Kaylee Burns | Erin Grant | Alison Gopnik | Tom Griffiths
Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing

We propose a new dataset for evaluating question answering models with respect to their capacity to reason about beliefs. Our tasks are inspired by theory-of-mind experiments that examine whether children are able to reason about the beliefs of others, in particular when those beliefs differ from reality. We evaluate a number of recent neural models with memory augmentation. We find that all fail on our tasks, which require keeping track of inconsistent states of the world; moreover, the models’ accuracy decreases notably when random sentences are introduced to the tasks at test time.
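As an illustration of the kind of false-belief episode these tasks involve, here is a hypothetical example written as Python data; the story, question, and answer are invented for this sketch and not drawn from the released dataset.

# A Sally-Anne-style false-belief episode (hypothetical, for illustration only).
story = [
    "Sally entered the kitchen.",
    "Sally put the apple in the basket.",
    "Sally left the kitchen.",            # Sally stops observing the scene.
    "Anne moved the apple to the box.",   # Reality changes; Sally's belief does not.
]
question = "Where will Sally look for the apple?"
answer = "basket"  # The correct answer tracks Sally's (now false) belief, not reality.

Answering correctly requires the model to maintain two inconsistent states of the world at once: where the apple actually is and where Sally believes it is.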

Predicting and Explaining Human Semantic Search in a Cognitive Model
Filip Miscevic | Aida Nematzadeh | Suzanne Stevenson
Proceedings of the 8th Workshop on Cognitive Modeling and Computational Linguistics (CMCL 2018)

Exploiting Attention to Reveal Shortcomings in Memory Models
Kaylee Burns | Aida Nematzadeh | Erin Grant | Alison Gopnik | Tom Griffiths
Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP

The decision-making processes of deep networks are difficult to understand, and while their accuracy often improves with increased architectural complexity, so too does their opacity. Practical use of machine learning models, especially for question-answering applications, demands a system that is interpretable. We analyze the attention of a memory network model to reconcile contradictory performance on a challenging question-answering dataset that is inspired by theory-of-mind experiments. We equate success on questions to task classification, which explains not only test-time failures but also how well the model generalizes to new training conditions.

2015

A Computational Cognitive Model of Novel Word Generalization
Aida Nematzadeh | Erin Grant | Suzanne Stevenson
Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing

2014

A Cognitive Model of Semantic Network Learning
Aida Nematzadeh | Afsaneh Fazly | Suzanne Stevenson
Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)

2012

A Computational Model of Memory, Attention, and Word Learning
Aida Nematzadeh | Afsaneh Fazly | Suzanne Stevenson
Proceedings of the 3rd Workshop on Cognitive Modeling and Computational Linguistics (CMCL 2012)