Keith Stevens


2019

Hierarchical Document Encoder for Parallel Corpus Mining
Mandy Guo | Yinfei Yang | Keith Stevens | Daniel Cer | Heming Ge | Yun-hsuan Sung | Brian Strope | Ray Kurzweil
Proceedings of the Fourth Conference on Machine Translation (Volume 1: Research Papers)

We explore using multilingual document embeddings for nearest neighbor mining of parallel data. Three document-level representations are investigated: (i) document embeddings generated by simply averaging multilingual sentence embeddings; (ii) a neural bag-of-words (BoW) document encoding model; (iii) a hierarchical multilingual document encoder (HiDE) that builds on our sentence-level model. The results show that document embeddings derived from sentence-level averaging are surprisingly effective on clean datasets, but suggest that models trained hierarchically at the document level are more effective on noisy data. Analysis experiments demonstrate that our hierarchical models are very robust to variations in the underlying sentence embedding quality. Using document embeddings trained with HiDE achieves state-of-the-art results on United Nations (UN) parallel document mining: 94.9% P@1 for en-fr and 97.3% P@1 for en-es.
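The sentence-averaging baseline from the abstract can be sketched in a few lines. This is a minimal illustration, assuming precomputed sentence embeddings; the function name and toy data are my own, not from the paper:

```python
import numpy as np

def average_doc_embedding(sentence_embeddings):
    """Document embedding as the mean of its sentence embeddings, L2-normalized."""
    doc = np.mean(sentence_embeddings, axis=0)
    return doc / np.linalg.norm(doc)

# Toy example: two "documents" with random 4-dimensional sentence embeddings.
rng = np.random.default_rng(0)
doc_a = average_doc_embedding(rng.normal(size=(3, 4)))  # 3 sentences
doc_b = average_doc_embedding(rng.normal(size=(5, 4)))  # 5 sentences

# Since both vectors are unit-norm, their dot product is the cosine similarity
# used for nearest neighbor mining.
similarity = float(doc_a @ doc_b)
```

The hierarchical HiDE model replaces the fixed mean with a learned document encoder, which the paper finds more robust on noisy data.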

2018

Effective Parallel Corpus Mining using Bilingual Sentence Embeddings
Mandy Guo | Qinlan Shen | Yinfei Yang | Heming Ge | Daniel Cer | Gustavo Hernandez Abrego | Keith Stevens | Noah Constant | Yun-Hsuan Sung | Brian Strope | Ray Kurzweil
Proceedings of the Third Conference on Machine Translation: Research Papers

This paper presents an effective approach for parallel corpus mining using bilingual sentence embeddings. Our embedding models are trained to produce similar representations exclusively for bilingual sentence pairs that are translations of each other. This is achieved using a novel training method that introduces hard negatives consisting of sentences that are not translations but have some degree of semantic similarity. The quality of the resulting embeddings is evaluated on parallel corpus reconstruction and by assessing machine translation systems trained on gold vs. mined sentence pairs. We find that the sentence embeddings can be used to reconstruct the United Nations Parallel Corpus (Ziemski et al., 2016) at the sentence level with a precision of 48.9% for en-fr and 54.9% for en-es. When adapted to document-level matching, we achieve a parallel document matching accuracy that is comparable to the significantly more computationally intensive approach of Uszkoreit et al. (2010). Using reconstructed parallel data, we are able to train NMT models that perform nearly as well as models trained on the original data (within 1-2 BLEU).
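The mining step described above reduces to nearest-neighbor search under cosine similarity, from which a precision@1 score can be computed when gold alignments are known. A minimal sketch with synthetic embeddings (the function name and toy data are illustrative, not the paper's implementation):

```python
import numpy as np

def mine_pairs(src, tgt):
    """For each source embedding, return the index of the nearest target
    embedding by cosine similarity (brute-force search)."""
    src = src / np.linalg.norm(src, axis=1, keepdims=True)
    tgt = tgt / np.linalg.norm(tgt, axis=1, keepdims=True)
    return np.argmax(src @ tgt.T, axis=1)

# Toy corpus: each "target" is a slightly perturbed copy of its aligned
# "source", so nearest-neighbor search should recover the identity alignment.
rng = np.random.default_rng(1)
src = rng.normal(size=(10, 8))
tgt = src + 0.01 * rng.normal(size=(10, 8))

matches = mine_pairs(src, tgt)
p_at_1 = float(np.mean(matches == np.arange(10)))  # precision@1
```

At corpus scale, the brute-force `src @ tgt.T` product would be replaced by approximate nearest-neighbor search; the hard-negative training the abstract describes is what makes the embedding space discriminative enough for this retrieval to work on real bitext.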

2012

Exploring Topic Coherence over Many Models and Many Topics
Keith Stevens | Philip Kegelmeyer | David Andrzejewski | David Buttler
Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning

Evaluating Unsupervised Ensembles when applied to Word Sense Induction
Keith Stevens
Proceedings of ACL 2012 Student Research Workshop

2011

Measuring the Impact of Sense Similarity on Word Sense Induction
David Jurgens | Keith Stevens
Proceedings of the First Workshop on Unsupervised Learning in NLP

2010

HERMIT: Flexible Clustering for the SemEval-2 WSI Task
David Jurgens | Keith Stevens
Proceedings of the 5th International Workshop on Semantic Evaluation

The S-Space Package: An Open Source Package for Word Space Models
David Jurgens | Keith Stevens
Proceedings of the ACL 2010 System Demonstrations

Capturing Nonlinear Structure in Word Spaces through Dimensionality Reduction
David Jurgens | Keith Stevens
Proceedings of the 2010 Workshop on GEometrical Models of Natural Language Semantics

2009

Event Detection in Blogs using Temporal Random Indexing
David Jurgens | Keith Stevens
Proceedings of the Workshop on Events in Emerging Text Types