Jason Riesa


2023

FRMT: A Benchmark for Few-Shot Region-Aware Machine Translation
Parker Riley | Timothy Dozat | Jan A. Botha | Xavier Garcia | Dan Garrette | Jason Riesa | Orhan Firat | Noah Constant
Transactions of the Association for Computational Linguistics, Volume 11

We present FRMT, a new dataset and evaluation benchmark for Few-shot Region-aware Machine Translation, a type of style-targeted translation. The dataset consists of professional translations from English into two regional variants each of Portuguese and Mandarin Chinese. Source documents are selected to enable detailed analysis of phenomena of interest, including lexically distinct terms and distractor terms. We explore automatic evaluation metrics for FRMT and validate their correlation with expert human evaluation across both region-matched and mismatched rating scenarios. Finally, we present a number of baseline models for this task, and offer guidelines for how researchers can train, evaluate, and compare their own models. Our dataset and evaluation code are publicly available: https://bit.ly/frmt-task.
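To make the "lexically distinct terms and distractor terms" idea concrete, the sketch below computes a toy region-aware lexical check: the fraction of outputs that contain the region-expected term and avoid the other region's distractor. The term pairs, data layout, and function name are illustrative assumptions, not the benchmark's official metric.

    # Illustrative only: a toy lexical check for region-targeted outputs.
    # Term pairs and data layout are assumptions, not FRMT's official metric.
    def lexical_check(outputs, term_pairs):
        """outputs: translated sentences for one target region.
        term_pairs: one (expected_term, distractor_term) pair per sentence,
        e.g. pt-BR 'trem' (expected) vs. pt-PT 'comboio' (distractor)."""
        hits = 0
        for sent, (expected, distractor) in zip(outputs, term_pairs):
            lowered = sent.lower()
            if expected in lowered and distractor not in lowered:
                hits += 1
        return hits / max(len(outputs), 1)

    # Two hypothetical pt-BR outputs scored against expected/distractor terms.
    outputs = ["Peguei o trem para o centro.", "O comboio estava atrasado."]
    pairs = [("trem", "comboio")] * 2
    print(lexical_check(outputs, pairs))  # 0.5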

2020

Improving Multilingual Models with Language-Clustered Vocabularies
Hyung Won Chung | Dan Garrette | Kiat Chuan Tan | Jason Riesa
Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)

State-of-the-art multilingual models depend on vocabularies that cover all of the languages the model will expect to see at inference time, but the standard methods for generating those vocabularies are not ideal for massively multilingual applications. In this work, we introduce a novel procedure for multilingual vocabulary generation that combines the separately trained vocabularies of several automatically derived language clusters, thus balancing the trade-off between cross-lingual subword sharing and language-specific vocabularies. Our experiments show improvements across languages on the key multilingual benchmarks TyDi QA (+2.9 F1), XNLI (+2.1%), and WikiAnn NER (+2.8 F1), as well as a factor-of-8 reduction in out-of-vocabulary rate, all without increasing the size of the model or data.
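The high-level recipe in the abstract (automatically cluster the languages, train a vocabulary per cluster, combine the per-cluster vocabularies) can be sketched as follows. The features (character-trigram distributions), the clustering method (k-means), and the frequency-cutoff "vocabulary learner" are stand-in assumptions; a real system would train a WordPiece or SentencePiece vocabulary per cluster.

    # Minimal sketch of cluster-then-combine vocabulary generation, assuming
    # character-trigram frequency vectors as language features and a simple
    # frequency cutoff per cluster in place of a real subword learner.
    # Not the paper's exact procedure.
    from collections import Counter
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.feature_extraction.text import CountVectorizer

    def cluster_vocab(corpora, n_clusters=2, per_cluster_size=50):
        """corpora: dict mapping language code -> list of sentences."""
        langs = sorted(corpora)
        docs = [" ".join(corpora[lang]) for lang in langs]

        # 1) Represent each language by its character-trigram distribution.
        vec = CountVectorizer(analyzer="char", ngram_range=(3, 3))
        counts = vec.fit_transform(docs).toarray().astype(float)
        feats = counts / counts.sum(axis=1, keepdims=True)

        # 2) Automatically derive language clusters.
        labels = KMeans(n_clusters=n_clusters, n_init=10,
                        random_state=0).fit_predict(feats)

        # 3) Build one vocabulary per cluster (here: top-k trigrams by frequency),
        #    then combine the per-cluster vocabularies into the final vocabulary.
        vocab = set()
        for c in range(n_clusters):
            counter = Counter()
            for lang, label in zip(langs, labels):
                if label == c:
                    for sent in corpora[lang]:
                        counter.update(sent[i:i + 3] for i in range(len(sent) - 2))
            vocab.update(t for t, _ in counter.most_common(per_cluster_size))
        return dict(zip(langs, labels)), sorted(vocab)

    # Tiny hypothetical example: two Romance and two CJK "corpora".
    corpora = {
        "pt": ["o gato preto dorme", "um trem novo"],
        "es": ["el gato negro duerme", "un tren nuevo"],
        "zh": ["黑猫在睡觉", "一辆新火车"],
        "ja": ["黒い猫が寝ている", "新しい電車"],
    }
    clusters, vocab = cluster_vocab(corpora, n_clusters=2, per_cluster_size=20)
    print(clusters)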

2019

Small and Practical BERT Models for Sequence Labeling
Henry Tsai | Jason Riesa | Melvin Johnson | Naveen Arivazhagan | Xin Li | Amelia Archer
Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)

We propose a practical scheme to train a single multilingual sequence labeling model that yields state-of-the-art results and is small and fast enough to run on a single CPU. Starting from a public multilingual BERT checkpoint, our final model is 6x smaller and 27x faster, and has higher accuracy than a state-of-the-art multilingual baseline. We show that our model performs especially well on low-resource languages and handles codemixed input text without being explicitly trained on codemixed examples. We showcase the effectiveness of our method by reporting results on part-of-speech tagging and morphological prediction across 70 treebanks and 48 languages.
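The abstract does not spell out how the smaller, faster model is obtained. Purely as a generic illustration of compressing a large teacher into a small student for sequence labeling (not necessarily this paper's recipe), a per-token distillation loss might look like the sketch below; the shapes, temperature, and mixing weight are assumptions.

    # Generic per-token distillation loss for sequence labeling, shown only as
    # an illustration of teacher-student compression; the abstract above does
    # not specify the paper's actual training recipe.
    import torch
    import torch.nn.functional as F

    def token_distillation_loss(student_logits, teacher_logits, gold_labels,
                                temperature=2.0, alpha=0.5, ignore_index=-100):
        """student_logits, teacher_logits: [batch, seq_len, num_tags]
        gold_labels: [batch, seq_len], with ignore_index on padding."""
        t = temperature
        # Soft targets: match the teacher's per-token tag distribution.
        soft = F.kl_div(
            F.log_softmax(student_logits / t, dim=-1),
            F.softmax(teacher_logits / t, dim=-1),
            reduction="batchmean",
        ) * (t * t)
        # Hard targets: standard cross-entropy against gold tags.
        hard = F.cross_entropy(
            student_logits.view(-1, student_logits.size(-1)),
            gold_labels.view(-1),
            ignore_index=ignore_index,
        )
        return alpha * soft + (1.0 - alpha) * hard

    # Toy shapes: batch of 2 sentences, length 5, 17 tags.
    s = torch.randn(2, 5, 17, requires_grad=True)
    t_ = torch.randn(2, 5, 17)
    y = torch.randint(0, 17, (2, 5))
    print(token_distillation_loss(s, t_, y).item())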

2018

A Fast, Compact, Accurate Model for Language Identification of Codemixed Text
Yuan Zhang | Jason Riesa | Daniel Gillick | Anton Bakalov | Jason Baldridge | David Weiss
Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing

We address fine-grained multilingual language identification: providing a language code for every token in a sentence, including codemixed text containing multiple languages. Such text is prevalent online, in documents, social media, and message boards. We show that a feed-forward network with a simple globally constrained decoder can accurately and rapidly label both codemixed and monolingual text in 100 languages and 100 language pairs. This model outperforms previously published multilingual approaches in terms of both accuracy and speed, yielding an 800x speed-up and an average absolute gain of 19.5% across three codemixed datasets. It furthermore outperforms several benchmark systems on monolingual language identification.
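One way to read "a simple globally constrained decoder" is a decoder that restricts each sentence to a small set of languages. The sketch below assumes the constraint is "at most two languages per sentence" and exhaustively scores every candidate language set; the per-token scores would come from the feed-forward model, but are random here. The constraint, scoring, and language list are assumptions for illustration.

    # Sketch of a globally constrained decode for per-token language ID,
    # assuming the constraint "at most two languages per sentence".
    from itertools import combinations
    import numpy as np

    def constrained_decode(token_logprobs, languages):
        """token_logprobs: [num_tokens, num_languages] per-token log-probabilities.
        Returns the best token labeling that uses at most two languages."""
        num_langs = token_logprobs.shape[1]
        best_score, best_labels = -np.inf, None
        # Consider every single language and every unordered language pair.
        candidates = ([(i,) for i in range(num_langs)]
                      + list(combinations(range(num_langs), 2)))
        for allowed in candidates:
            sub = token_logprobs[:, list(allowed)]   # restrict to allowed languages
            picks = sub.argmax(axis=1)               # best allowed language per token
            score = sub.max(axis=1).sum()            # sentence-level score
            if score > best_score:
                best_score = score
                best_labels = [languages[allowed[p]] for p in picks]
        return best_labels, best_score

    rng = np.random.default_rng(0)
    logprobs = np.log(rng.dirichlet(np.ones(4), size=6))  # 6 tokens, 4 languages
    print(constrained_decode(logprobs, ["en", "es", "hi", "zh"]))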

2012

Automatic Parallel Fragment Extraction from Noisy Data
Jason Riesa | Daniel Marcu
Proceedings of the 2012 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies

2011

Feature-Rich Language-Independent Syntax-Based Alignment for Statistical Machine Translation
Jason Riesa | Ann Irvine | Daniel Marcu
Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing

2010

Hierarchical Search for Word Alignment
Jason Riesa | Daniel Marcu
Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics

2006

Minimally Supervised Morphological Segmentation with Applications to Machine Translation
Jason Riesa | David Yarowsky
Proceedings of the 7th Conference of the Association for Machine Translation in the Americas: Technical Papers

Inflected languages in a low-resource setting present a data sparsity problem for statistical machine translation. In this paper, we present a minimally supervised algorithm for morpheme segmentation on Arabic dialects which reduces unknown words at translation time by over 50% and total vocabulary size by over 40%, and yields a significant increase in BLEU score over a previous state-of-the-art phrase-based statistical MT system.
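As a toy illustration of the general idea of minimal supervision (a small hand-written affix list plus corpus frequency statistics), and not this paper's actual algorithm, a greedy affix-stripping segmenter could look like the sketch below; the transliterated clitic lists, threshold, and example words are assumptions.

    # Toy affix stripping with minimal supervision: a small hand-written list
    # of candidate prefixes/suffixes (the only supervision) plus a corpus
    # frequency check so we only split when the stem is attested.
    from collections import Counter

    PREFIXES = ["wa", "bi", "al", "li"]   # example clitics, transliterated
    SUFFIXES = ["ha", "hum", "na", "at"]

    def segment(word, stem_counts, min_count=2):
        """Greedily strip one prefix and one suffix if the remaining stem
        is frequent enough in the corpus; otherwise leave the word intact."""
        pieces, stem = [], word
        for p in PREFIXES:
            if stem.startswith(p) and stem_counts[stem[len(p):]] >= min_count:
                pieces.append(p + "+")
                stem = stem[len(p):]
                break
        suffix = ""
        for s in SUFFIXES:
            if stem.endswith(s) and stem_counts[stem[:-len(s)]] >= min_count:
                suffix = "+" + s
                stem = stem[:-len(s)]
                break
        return pieces + [stem] + ([suffix] if suffix else [])

    corpus = ["ktab", "alktab", "ktabha", "ktab", "walktab"]
    counts = Counter(corpus)
    print(segment("alktab", counts))   # ['al+', 'ktab']
    print(segment("ktabha", counts))   # ['ktab', '+ha']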