Roland Kuhn


2020

The Indigenous Languages Technology project at NRC Canada: An empowerment-oriented approach to developing language software
Roland Kuhn | Fineen Davis | Alain Désilets | Eric Joanis | Anna Kazantseva | Rebecca Knowles | Patrick Littell | Delaney Lothian | Aidan Pine | Caroline Running Wolf | Eddie Santos | Darlene Stewart | Gilles Boulianne | Vishwa Gupta | Brian Maracle Owennatékha | Akwiratékha’ Martin | Christopher Cox | Marie-Odile Junker | Olivia Sammons | Delasie Torkornoo | Nathan Thanyehténhas Brinklow | Sara Child | Benoît Farley | David Huggins-Daines | Daisy Rosenblum | Heather Souter
Proceedings of the 28th International Conference on Computational Linguistics

This paper surveys the first, three-year phase of a project at the National Research Council of Canada that is developing software to assist Indigenous communities in Canada in preserving their languages and extending their use. The project aimed to work within the empowerment paradigm, in which collaboration with communities and fulfillment of their goals are central. Since many of the technologies we developed were responses to community needs, the project became a collection of diverse subprojects, including: a sophisticated framework for building verb conjugators for highly inflectional polysynthetic languages (such as Kanyen’kéha, in the Iroquoian language family); the release of what is probably the largest available corpus of sentences in a polysynthetic language (Inuktut) aligned with English sentences, together with experiments on machine translation (MT) systems trained on this corpus; free online services based on automatic speech recognition (ASR) that ease the transcription bottleneck for recordings of speech in Indigenous (and other) languages; software for text prediction and read-along audiobooks for Indigenous languages; and several other subprojects.

The Nunavut Hansard Inuktitut–English Parallel Corpus 3.0 with Preliminary Machine Translation Results
Eric Joanis | Rebecca Knowles | Roland Kuhn | Samuel Larkin | Patrick Littell | Chi-kiu Lo | Darlene Stewart | Jeffrey Micher
Proceedings of the 12th Language Resources and Evaluation Conference

The Inuktitut language, a member of the Inuit-Yupik-Unangan language family, is spoken across Arctic Canada and noted for its morphological complexity. It is an official language of two territories, Nunavut and the Northwest Territories, and has recognition in additional regions. This paper describes a newly released sentence-aligned Inuktitut–English corpus based on the proceedings of the Legislative Assembly of Nunavut, covering sessions from April 1999 to June 2017. With approximately 1.3 million aligned sentence pairs, this is, to our knowledge, the largest parallel corpus of a polysynthetic language or an Indigenous language of the Americas released to date. The paper describes the alignment methodology used, the evaluation of the alignments, and preliminary experiments on statistical and neural machine translation (SMT and NMT) between Inuktitut and English, in both directions.

2018

Indigenous language technologies in Canada: Assessment, challenges, and successes
Patrick Littell | Anna Kazantseva | Roland Kuhn | Aidan Pine | Antti Arppe | Christopher Cox | Marie-Odile Junker
Proceedings of the 27th International Conference on Computational Linguistics

In this article, we discuss which text, speech, and image technologies have been developed, and would be feasible to develop, for the approximately 60 Indigenous languages spoken in Canada. In particular, we concentrate on technologies that may be feasible to develop for most or all of these languages, not just those that may be feasible for the few most-resourced of these. We assess past achievements and consider future horizons for Indigenous language transliteration, text prediction, spell-checking, approximate search, machine translation, speech recognition, speaker diarization, speech synthesis, optical character recognition, and computer-aided language learning.

2017

NRC Machine Translation System for WMT 2017
Chi-kiu Lo | Boxing Chen | Colin Cherry | George Foster | Samuel Larkin | Darlene Stewart | Roland Kuhn
Proceedings of the Second Conference on Machine Translation

2016

NRC Russian-English Machine Translation System for WMT 2016
Chi-kiu Lo | Colin Cherry | George Foster | Darlene Stewart | Rabib Islam | Anna Kazantseva | Roland Kuhn
Proceedings of the First Conference on Machine Translation: Volume 2, Shared Task Papers

Bilingual Methods for Adaptive Training Data Selection for Machine Translation
Boxing Chen | Roland Kuhn | George Foster | Colin Cherry | Fei Huang
Conferences of the Association for Machine Translation in the Americas: MT Researchers' Track

In this paper, we propose a new data selection method that uses semi-supervised convolutional neural networks based on bitokens (Bi-SSCNNs) to select training data for machine translation systems from a large bilingual corpus. In earlier work, we devised a data selection method based on semi-supervised convolutional neural networks (SSCNNs); the new Bi-SSCNN method is based on bitokens, which exploit bilingual information. When tested on two translation tasks (Chinese-to-English and Arabic-to-English), the new method significantly outperforms the three other data selection methods in our experiments. We also show that the Bi-SSCNN method is much more effective than the other methods at preventing noisy sentence pairs from being chosen for training. More interestingly, this method needs only a tiny amount of in-domain data to train the selection model, which makes fine-grained topic-dependent translation adaptation possible. In follow-up experiments, we find that neural machine translation (NMT) is more sensitive to noisy data than statistical machine translation (SMT). Therefore, Bi-SSCNN, which can effectively screen out noisy sentence pairs, benefits NMT much more than SMT. We observed a BLEU improvement of over 3 points on an English-to-French WMT task when Bi-SSCNNs were used.
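
The bitoken representation at the heart of Bi-SSCNN can be pictured with a short sketch. The following is a hypothetical illustration, not the authors' code: it fuses each source token with the target token(s) it is aligned to, using an assumed `src|tgt` joining convention and an assumed `NULL` marker for unaligned words, so that a classifier over bitokens sees bilingual evidence rather than one language at a time.

```python
def bitokens(src_tokens, tgt_tokens, alignment):
    """Build a bitoken sequence from a word-aligned sentence pair.

    alignment: set of (src_index, tgt_index) pairs.
    Conventions (assumed for illustration): aligned target words are
    joined with "_", unaligned source words pair with "NULL".
    """
    links = {}
    for i, j in alignment:
        links.setdefault(i, []).append(j)
    result = []
    for i, s in enumerate(src_tokens):
        if i in links:
            tgt = "_".join(tgt_tokens[j] for j in sorted(links[i]))
        else:
            tgt = "NULL"
        result.append(f"{s}|{tgt}")
    return result

print(bitokens(["le", "chat", "dort"],
               ["the", "cat", "sleeps"],
               {(0, 0), (1, 1), (2, 2)}))
# ['le|the', 'chat|cat', 'dort|sleeps']
```

A semi-supervised CNN would then be trained over such bitoken sequences, instead of over monolingual token sequences, to score candidate training sentence pairs.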

2015

Multi-level Evaluation for Machine Translation
Boxing Chen | Hongyu Guo | Roland Kuhn
Proceedings of the Tenth Workshop on Statistical Machine Translation

2014

Coarse “split and lump” bilingual language models for richer source information in SMT
Darlene Stewart | Roland Kuhn | Eric Joanis | George Foster
Proceedings of the 11th Conference of the Association for Machine Translation in the Americas: MT Researchers Track

Recently, there has been interest in automatically generated word classes for improving statistical machine translation (SMT) quality: e.g., (Wuebker et al., 2013). We create new models by replacing words with word classes in features applied during decoding; we call these “coarse models”. We find that coarse versions of the bilingual language models (biLMs) of (Niehues et al., 2011) yield larger BLEU gains than the original biLMs. BiLMs provide phrase-based systems with rich contextual information from the source sentence, but because they have a large number of types, they suffer from data sparsity. Niehues et al. (2011) mitigated this problem by replacing source or target words with parts of speech (POSs). We vary their approach in two ways: by clustering words on the source or target side over a range of granularities (word clustering), and by clustering the bilingual units that make up biLMs (bitoken clustering). We find that log-linear combinations of the resulting coarse biLMs with each other and with coarse LMs (LMs based on word classes) yield even higher scores than single coarse models. When we add an appealing “generic” coarse configuration chosen on English > French devtest data to four language pairs (keeping the structure fixed, but providing language-pair-specific models for each pair), BLEU gains on blind test data against strong baselines, averaged over 5 runs, are +0.80 for English > French, +0.35 for French > English, +1.0 for Arabic > English, and +0.6 for Chinese > English.
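
As a toy sketch (made-up class maps, not the paper's clusterings, which would come from an automatic word-class induction tool such as mkcls), "bitoken clustering" can be pictured as mapping both halves of each `src|tgt` bitoken to coarse class IDs, collapsing the huge bitoken vocabulary onto a few types before n-gram scoring:

```python
# Toy word-class maps standing in for automatically learned clusters;
# real maps would have thousands of entries per language.
src_classes = {"le": "C1", "chat": "C7", "dort": "C3"}
tgt_classes = {"the": "D1", "cat": "D5", "sleeps": "D2"}

def coarsen_bitokens(bitoks):
    """Replace both halves of each src|tgt bitoken with class IDs."""
    out = []
    for b in bitoks:
        s, t = b.split("|")
        out.append(src_classes.get(s, "UNK") + "|" + tgt_classes.get(t, "UNK"))
    return out

print(coarsen_bitokens(["le|the", "chat|cat", "dort|sleeps"]))
# ['C1|D1', 'C7|D5', 'C3|D2']
```

An ordinary n-gram LM trained over such coarse streams suffers far less from data sparsity than one trained over raw bitokens, which is the motivation the abstract gives for coarse models.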

A comparison of mixture and vector space techniques for translation model adaptation
Boxing Chen | Roland Kuhn | George Foster
Proceedings of the 11th Conference of the Association for Machine Translation in the Americas: MT Researchers Track

In this paper, we propose two extensions to the vector space model (VSM) adaptation technique (Chen et al., 2013b) for statistical machine translation (SMT), both of which result in significant improvements. We also systematically compare the VSM techniques to three mixture model adaptation techniques: linear mixture, log-linear mixture (Foster and Kuhn, 2007), and provenance features (Chiang et al., 2011). Experiments on NIST Chinese-to-English and Arabic-to-English tasks show that all methods achieve significant improvement over a competitive non-adaptive baseline. Except for the original VSM adaptation method, all methods yield improvements in the +1.7-2.0 BLEU range. Combining them gives further significant improvements of up to +2.6-3.3 BLEU over the baseline.
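
Of the mixture techniques compared, the linear mixture is the simplest to picture. The sketch below is a minimal illustration under assumed data structures, not the paper's implementation: each subcorpus contributes its own phrase table, and the adapted probability is a weighted sum of the component probabilities, with weights that would in practice be tuned to favour subcorpora resembling the in-domain development set.

```python
def linear_mixture(phrase_tables, weights):
    """Linearly interpolate component phrase tables.

    phrase_tables: list of dicts {(src, tgt): p(tgt | src)};
    weights: mixture weights, assumed to sum to 1.
    """
    mixed = {}
    for table, w in zip(phrase_tables, weights):
        for pair, p in table.items():
            mixed[pair] = mixed.get(pair, 0.0) + w * p
    return mixed

# Two toy component tables, e.g. from an in-domain and an out-of-domain subcorpus.
t1 = {("chien", "dog"): 0.9, ("chien", "hound"): 0.1}
t2 = {("chien", "dog"): 0.5, ("chien", "hound"): 0.5}
mixed = linear_mixture([t1, t2], [0.7, 0.3])
print(round(mixed[("chien", "dog")], 2))  # 0.78
```

A log-linear mixture instead combines the components as weighted factors (a product of powers), so it behaves quite differently on phrase pairs missing from one component, which is part of what the paper's comparison probes.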

2013

Adaptation of Reordering Models for Statistical Machine Translation
Boxing Chen | George Foster | Roland Kuhn
Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies

Simulating Discriminative Training for Linear Mixture Adaptation in Statistical Machine Translation
George Foster | Boxing Chen | Roland Kuhn
Proceedings of Machine Translation Summit XIV: Papers

Transferring markup tags in statistical machine translation: a two-stream approach
Eric Joanis | Darlene Stewart | Samuel Larkin | Roland Kuhn
Proceedings of the 2nd Workshop on Post-editing Technology and Practice

Vector Space Model for Adaptation in Statistical Machine Translation
Boxing Chen | Roland Kuhn | George Foster
Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)

2012

PORT: a Precision-Order-Recall MT Evaluation Metric for Tuning
Boxing Chen | Roland Kuhn | Samuel Larkin
Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)

Enlarging Paraphrase Collections through Generalization and Instantiation
Atsushi Fujita | Pierre Isabelle | Roland Kuhn
Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning

Improving AMBER, an MT Evaluation Metric
Boxing Chen | Roland Kuhn | George Foster
Proceedings of the Seventh Workshop on Statistical Machine Translation

2011

Unpacking and Transforming Feature Functions: New Ways to Smooth Phrase Tables
Boxing Chen | Roland Kuhn | George Foster | Howard Johnson
Proceedings of Machine Translation Summit XIII: Papers

Semantic smoothing and fabrication of phrase pairs for SMT
Boxing Chen | Roland Kuhn | George Foster
Proceedings of the 8th International Workshop on Spoken Language Translation: Evaluation Campaign

In statistical machine translation systems, phrases with similar meanings often have similar but not identical distributions of translations. This paper proposes a new soft clustering method to smooth the conditional translation probabilities for a given phrase with those of semantically similar phrases. We call this semantic smoothing (SS). Moreover, we fabricate new phrase pairs that were not observed in training data, but which may be used for decoding. In learning curve experiments against a strong baseline, we obtain a consistent pattern of modest improvement from semantic smoothing, and further modest improvement from phrase pair fabrication.
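
A minimal sketch of the idea, under assumed data structures (an illustration, not the paper's exact formulation): interpolate a phrase's translation distribution with a similarity-weighted average of the distributions of its semantically similar neighbours. Note that target phrases seen only with a neighbour receive nonzero probability, which mirrors the abstract's fabrication of phrase pairs unobserved in training.

```python
def semantic_smooth(p_tables, neighbours, phrase, lam=0.3):
    """Smooth p(tgt | phrase) with similar phrases' distributions.

    p_tables: {src_phrase: {tgt_phrase: p(tgt | src)}};
    neighbours: {similar_src_phrase: similarity score};
    lam: interpolation weight given to the neighbour pool (assumed value).
    """
    base = p_tables[phrase]
    total = sum(neighbours.values())
    pooled = {}
    for other, sim in neighbours.items():
        for tgt, p in p_tables[other].items():
            pooled[tgt] = pooled.get(tgt, 0.0) + (sim / total) * p
    return {tgt: (1 - lam) * base.get(tgt, 0.0) + lam * pooled.get(tgt, 0.0)
            for tgt in set(base) | set(pooled)}

tables = {
    "big house": {"grande maison": 1.0},
    "large house": {"grande maison": 0.6, "grosse maison": 0.4},
}
smoothed = semantic_smooth(tables, {"large house": 1.0}, "big house")
print(round(smoothed["grande maison"], 2))  # 0.88
print(round(smoothed["grosse maison"], 2))  # 0.12
```

Here the fabricated pair ("big house", "grosse maison") enters the table purely via the neighbour, exactly the kind of new decoding option the abstract describes.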

AMBER: A Modified BLEU, Enhanced Ranking Metric
Boxing Chen | Roland Kuhn
Proceedings of the Sixth Workshop on Statistical Machine Translation

2010

Discriminative Instance Weighting for Domain Adaptation in Statistical Machine Translation
George Foster | Cyril Goutte | Roland Kuhn
Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing

Phrase Clustering for Smoothing TM Probabilities - or, How to Extract Paraphrases from Phrase Tables
Roland Kuhn | Boxing Chen | George Foster | Evan Stratford
Proceedings of the 23rd International Conference on Computational Linguistics (Coling 2010)

Bilingual Sense Similarity for Statistical Machine Translation
Boxing Chen | George Foster | Roland Kuhn
Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics

Translating Structured Documents
George Foster | Pierre Isabelle | Roland Kuhn
Proceedings of the 9th Conference of the Association for Machine Translation in the Americas: Research Papers

Machine Translation traditionally treats documents as sets of independent sentences. In many genres, however, documents are highly structured, and their structure contains information that can be used to improve translation quality. We present a preliminary approach to document translation that uses structural features to modify the behaviour of a language model, at sentence-level granularity. To our knowledge, this is the first attempt to incorporate structural information into statistical MT. In experiments on structured English/French documents from the Hansard corpus, we demonstrate small but statistically significant improvements.

Fast Consensus Hypothesis Regeneration for Machine Translation
Boxing Chen | George Foster | Roland Kuhn
Proceedings of the Joint Fifth Workshop on Statistical Machine Translation and MetricsMATR

Lessons from NRC’s Portage System at WMT 2010
Samuel Larkin | Boxing Chen | George Foster | Ulrich Germann | Eric Joanis | Howard Johnson | Roland Kuhn
Proceedings of the Joint Fifth Workshop on Statistical Machine Translation and MetricsMATR

2009

MT: the Current Research Landscape
Roland Kuhn | Pierre Isabelle
Proceedings of Machine Translation Summit XII: Plenaries

PortageLive: delivering machine translation technology via virtualization
Patrick Paul | Samuel Larkin | Ulrich Germann | Eric Joanis | Roland Kuhn
Proceedings of Machine Translation Summit XII: Plenaries

Phrase Translation Model Enhanced with Association based Features
Boxing Chen | George Foster | Roland Kuhn
Proceedings of Machine Translation Summit XII: Papers

Stabilizing Minimum Error Rate Training
George Foster | Roland Kuhn
Proceedings of the Fourth Workshop on Statistical Machine Translation

2008

Tighter Integration of Rule-Based and Statistical MT in Serial System Combination
Nicola Ueffing | Jens Stephan | Evgeny Matusov | Loïc Dugast | George Foster | Roland Kuhn | Jean Senellart | Jin Yang
Proceedings of the 22nd International Conference on Computational Linguistics (Coling 2008)

2007

Improving Translation Quality by Discarding Most of the Phrasetable
Howard Johnson | Joel Martin | George Foster | Roland Kuhn
Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP-CoNLL)

Integration of an Arabic Transliteration Module into a Statistical Machine Translation System
Mehdi M. Kashani | Eric Joanis | Roland Kuhn | George Foster | Fred Popowich
Proceedings of the Second Workshop on Statistical Machine Translation

Mixture-Model Adaptation for SMT
George Foster | Roland Kuhn
Proceedings of the Second Workshop on Statistical Machine Translation

Rule-Based Translation with Statistical Phrase-Based Post-Editing
Michel Simard | Nicola Ueffing | Pierre Isabelle | Roland Kuhn
Proceedings of the Second Workshop on Statistical Machine Translation

2006

Segment Choice Models: Feature-Rich Models for Global Distortion in Statistical Machine Translation
Roland Kuhn | Denis Yuen | Michel Simard | Patrick Paul | George Foster | Eric Joanis | Howard Johnson
Proceedings of the Human Language Technology Conference of the NAACL, Main Conference

Phrasetable Smoothing for Statistical Machine Translation
George Foster | Roland Kuhn | Howard Johnson
Proceedings of the 2006 Conference on Empirical Methods in Natural Language Processing

PORTAGE: with Smoothed Phrase Tables and Segment Choice Models
Howard Johnson | Fatiha Sadat | George Foster | Roland Kuhn | Michel Simard | Eric Joanis | Samuel Larkin
Proceedings on the Workshop on Statistical Machine Translation

Système de traduction automatique statistique combinant différentes ressources
Fatiha Sadat | George Foster | Roland Kuhn
Actes de la 13ème conférence sur le Traitement Automatique des Langues Naturelles. Posters

This article describes an approach that combines different statistical models for phrase-based machine translation. Several resources are used, including two parallel corpora with different characteristics and a bilingual terminology dictionary, in order to improve the quantitative and qualitative performance of the translation system. We evaluate our approach on the French–English language pair and show how combining the proposed resources significantly improves the results.

2005

PORTAGE: A Phrase-Based Machine Translation System
Fatiha Sadat | Howard Johnson | Akakpo Agbago | George Foster | Roland Kuhn | Joel Martin | Aaron Tikuisis
Proceedings of the ACL Workshop on Building and Using Parallel Texts

1991

Some Results on Stochastic Language Modelling
Renato De Mori | Roland Kuhn
Speech and Natural Language: Proceedings of a Workshop Held at Pacific Grove, California, February 19-22, 1991

1988

Speech Recognition and the Frequency of Recently Used Words: A Modified Markov Model for Natural Language
Roland Kuhn
Coling Budapest 1988 Volume 1: International Conference on Computational Linguistics