Timothy J. Hazen

Also published as: T. J. Hazen


2021

Increasing Robustness to Spurious Correlations using Forgettable Examples
Yadollah Yaghoobzadeh | Soroush Mehri | Remi Tachet des Combes | T. J. Hazen | Alessandro Sordoni
Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume

Neural NLP models tend to rely on spurious correlations between labels and input features to perform their tasks. Minority examples, i.e., examples that contradict the spurious correlations present in the majority of data points, have been shown to increase the out-of-distribution generalization of pre-trained language models. In this paper, we first propose using example forgetting to find minority examples without prior knowledge of the spurious correlations present in the dataset. Forgettable examples are instances that are either learned and then forgotten during training or never learned at all. We show empirically how these examples relate to the minorities in our training sets. Then, we introduce a new approach to robustify models by fine-tuning them twice: first on the full training data and second on the minority examples only. We obtain substantial improvements in out-of-distribution generalization when applying our approach to the MNLI, QQP and FEVER datasets.
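
As a rough illustration of the idea (a sketch under assumptions, not the authors' implementation), the code below marks a training example as forgettable if its per-epoch correctness record, assumed to have been collected during the first fine-tuning pass, shows it was either never learned or learned and later forgotten; the second fine-tuning pass would then use only these examples. The correctness_history structure is hypothetical.

from typing import Dict, List


def find_forgettable(correctness_history: Dict[int, List[bool]]) -> List[int]:
    """Return ids of examples that were never learned or were forgotten.

    correctness_history maps example id -> per-epoch correctness flags
    recorded while fine-tuning on the full training set.
    """
    forgettable = []
    for example_id, history in correctness_history.items():
        never_learned = not any(history)
        # A forgetting event: correct at epoch t, incorrect at epoch t+1.
        forgotten = any(prev and not curr for prev, curr in zip(history, history[1:]))
        if never_learned or forgotten:
            forgettable.append(example_id)
    return forgettable


if __name__ == "__main__":
    history = {
        0: [False, True, True, True],     # learned and retained: kept out
        1: [True, False, True, False],    # learned then forgotten: forgettable
        2: [False, False, False, False],  # never learned: forgettable
    }
    print(find_forgettable(history))  # -> [1, 2]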

2020

Proceedings of the 2nd Workshop on Life-long Learning for Spoken Language Systems
William M. Campbell | Alex Waibel | Dilek Hakkani-Tur | Timothy J. Hazen | Kevin Kilgour | Eunah Cho | Varun Kumar | Hadrien Glaude
Proceedings of the 2nd Workshop on Life-long Learning for Spoken Language Systems

2019

Ranking Passages for Argument Convincingness
Peter Potash | Adam Ferguson | Timothy J. Hazen
Proceedings of the 6th Workshop on Argument Mining

In data ranking applications, pairwise annotation is often more consistent than cardinal annotation for learning ranking models. We examine this in a case study on ranking text passages for argument convincingness. Our task is to choose text passages that provide the highest-quality, most-convincing arguments for opposing sides of a topic. Using data from a deployed system within the Bing search engine, we construct a pairwise-labeled dataset for argument convincingness that is substantially more comprehensive in topical coverage compared to existing public resources. We detail the process of extracting topical passages for queries submitted to a search engine, creating annotated sets of passages aligned to different stances on a topic, and assessing argument convincingness of passages using pairwise annotation. Using a state-of-the-art convincingness model, we evaluate several methods for using pairwise-annotated data examples to train models for ranking passages. Our results show pairwise training outperforms training that regresses to a target score for each passage. Our results also show a simple ‘win-rate’ score is a better regression target than the previously proposed page-rank target. Lastly, addressing the need to filter noisy crowd-sourced annotations when constructing a dataset, we show that filtering for transitivity within pairwise annotations is more effective than filtering based on annotation confidence measures for individual examples.
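
For illustration only, the sketch below computes a 'win-rate' regression target of the kind described above from pairwise annotations: each passage is scored by the fraction of the pairwise comparisons it appears in that it won. The (winner, loser) pair format and passage identifiers are assumptions, not the format of the deployed system's data.

from collections import defaultdict
from typing import Dict, List, Tuple


def win_rates(pairs: List[Tuple[str, str]]) -> Dict[str, float]:
    """pairs: (winner_id, loser_id) annotations for one topic and stance."""
    wins = defaultdict(int)
    comparisons = defaultdict(int)
    for winner, loser in pairs:
        wins[winner] += 1
        comparisons[winner] += 1
        comparisons[loser] += 1
    # Win-rate: wins divided by total comparisons involving the passage.
    return {p: wins[p] / comparisons[p] for p in comparisons}


if __name__ == "__main__":
    annotations = [("p1", "p2"), ("p1", "p3"), ("p2", "p3"), ("p3", "p1")]
    print(win_rates(annotations))  # p1: 0.67, p2: 0.5, p3: 0.33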

Probing for Semantic Classes: Diagnosing the Meaning Content of Word Embeddings
Yadollah Yaghoobzadeh | Katharina Kann | T. J. Hazen | Eneko Agirre | Hinrich Schütze
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics

Word embeddings typically represent different meanings of a word in a single conflated vector. Empirical analysis of embeddings of ambiguous words is currently limited by the small size of manually annotated resources and by the fact that word senses are treated as unrelated individual concepts. We present a large dataset based on manual Wikipedia annotations and word senses, where word senses from different words are related by semantic classes. This is the basis for novel diagnostic tests for an embedding’s content: we probe word embeddings for semantic classes and analyze the embedding space by classifying embeddings into semantic classes. Our main findings are: (i) Information about a sense is generally represented well in a single-vector embedding, provided the sense is frequent. (ii) A classifier can accurately predict whether a word is single-sense or multi-sense, based only on its embedding. (iii) Although rare senses are not well represented in single-vector embeddings, this does not have a negative impact on an NLP application whose performance depends on frequent senses.
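
The probing setup can be sketched with a small, self-contained example: a linear classifier is trained to predict a semantic class from an embedding, and its held-out accuracy indicates how well that class is encoded in the embedding space. The synthetic embeddings and labels below are placeholders, not the paper's Wikipedia-based dataset.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical stand-ins for pre-trained word embeddings and semantic-class labels.
num_words, dim, num_classes = 1000, 50, 4
labels = rng.integers(0, num_classes, size=num_words)
class_centers = rng.normal(size=(num_classes, dim))
embeddings = class_centers[labels] + 0.5 * rng.normal(size=(num_words, dim))

# Train a probing classifier and report held-out accuracy.
X_train, X_test, y_train, y_test = train_test_split(
    embeddings, labels, test_size=0.2, random_state=0
)
probe = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"probing accuracy: {probe.score(X_test, y_test):.3f}")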

2018

Systematic Error Analysis of the Stanford Question Answering Dataset
Marc-Antoine Rondeau | T. J. Hazen
Proceedings of the Workshop on Machine Reading for Question Answering

We analyzed the outputs of multiple question answering (QA) models applied to the Stanford Question Answering Dataset (SQuAD) to identify the core challenges for QA systems on this dataset. Through an iterative process, challenging aspects were hypothesized through qualitative analysis of the common error cases. A classifier was then constructed to predict whether SQuAD test examples were likely to be difficult for systems to answer based on features associated with the hypothesized aspects. The classifier’s performance was used to accept or reject each aspect as an indicator of difficulty. With this approach, we ensured that our hypotheses were systematically tested rather than simply accepted based on our pre-existing biases or on human evaluation of individual examples. This process also enabled us to identify the primary QA strategy learned by the models, i.e., systems determined the acceptable answer type for a question and then selected the answer span of that type whose local vicinity in the passage contained the highest density of words from the question.
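
As a hedged illustration of the accept/reject step (the features and labels below are invented placeholders, not the paper's data), a hypothesized difficulty aspect would be retained only if a classifier built on its features predicts system errors better than a majority-class baseline does.

import numpy as np
from sklearn.dummy import DummyClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)

# Hypothetical per-example features for one aspect (e.g. question/passage word
# overlap) and a flag indicating whether the QA system answered incorrectly.
features = rng.normal(size=(500, 3))
system_failed = (features[:, 0] + 0.3 * rng.normal(size=500) > 0).astype(int)

baseline = cross_val_score(DummyClassifier(strategy="most_frequent"),
                           features, system_failed, cv=5).mean()
aspect = cross_val_score(LogisticRegression(max_iter=1000),
                         features, system_failed, cv=5).mean()

# Accept the aspect as an indicator of difficulty only if it adds predictive power.
print(f"baseline={baseline:.3f} aspect={aspect:.3f} accept={aspect > baseline}")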

2006

Automatic Spoken Document Processing for Retrieval and Browsing
Ciprian Chelba | T. J. Hazen
Proceedings of the Human Language Technology Conference of the NAACL, Companion Volume: Tutorial Abstracts

2005

The MIT Spoken Lecture Processing Project
James R. Glass | Timothy J. Hazen | D. Scott Cyphers | Ken Schutte | Alex Park
Proceedings of HLT/EMNLP 2005 Interactive Demonstrations

2004

Analysis and Processing of Lecture Audio Data: Preliminary Investigations
James Glass | Timothy J. Hazen | Lee Hetherington | Chao Wang
Proceedings of the Workshop on Interdisciplinary Approaches to Speech Indexing and Retrieval at HLT-NAACL 2004