Conference of the Association for Machine Translation in the Americas (2010)




Proceedings of the 9th Conference of the Association for Machine Translation in the Americas: Research Papers

Discriminative Syntactic Reranking for Statistical Machine Translation
Simon Carter | Christof Monz

This paper describes a method that successfully exploits simple syntactic features for n-best translation candidate reranking using perceptrons. Our approach uses discriminative language modelling to rerank the n-best translations generated by a statistical machine translation system. The performance is evaluated for Arabic-to-English translation using NIST’s MT-Eval benchmarks. Whilst parse trees do not consistently help, we show how features extracted from a simple Part-of-Speech annotation layer outperform two competitive baselines, leading to significant BLEU improvements on three different test sets.
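
The reranking step these features feed into can be illustrated with a minimal structured perceptron over n-best lists. This is only a sketch of the general technique, not the authors' implementation; the POS-bigram feature extractor, the toy candidates, and sentence-level BLEU as the oracle criterion are assumptions made here for illustration.

    from collections import Counter

    def pos_bigram_features(pos_tags):
        # Simple features from a Part-of-Speech annotation layer:
        # counts of adjacent tag pairs.
        return Counter(zip(pos_tags, pos_tags[1:]))

    def score(weights, feats):
        return sum(weights.get(f, 0.0) * v for f, v in feats.items())

    def train_reranker(nbest_lists, epochs=10):
        # Each n-best list holds (pos_tags, sentence_bleu) candidates.
        # Standard perceptron update toward the oracle (highest-BLEU) one.
        weights = {}
        for _ in range(epochs):
            for candidates in nbest_lists:
                feats = [pos_bigram_features(tags) for tags, _ in candidates]
                pred = max(range(len(candidates)),
                           key=lambda i: score(weights, feats[i]))
                oracle = max(range(len(candidates)),
                             key=lambda i: candidates[i][1])
                if pred != oracle:
                    for f, v in feats[oracle].items():
                        weights[f] = weights.get(f, 0.0) + v
                    for f, v in feats[pred].items():
                        weights[f] = weights.get(f, 0.0) - v
        return weights

    # Toy n-best list: a dysfluent candidate and a better one.
    nbest = [[(["DT", "VBZ", "VBZ"], 0.2), (["DT", "NN", "VBZ"], 0.7)]]
    print(train_reranker(nbest))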

Fast Approximate String Matching with Suffix Arrays and A* Parsing
Philipp Koehn | Jean Senellart

We present a novel exact solution to the approximate string matching problem in the context of translation memories, where a text segment has to be matched against a large corpus, while allowing for errors. We use suffix arrays to detect exact n-gram matches, A* search heuristics to discard matches and A* parsing to validate candidate segments. The method outperforms the canonical baseline by a factor of 100, with average lookup times of 4.3–247ms for a segment in a realistic scenario.
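
As a rough illustration of the exact-match component, the sketch below builds a suffix array over a tokenized corpus and finds all occurrences of an n-gram by binary search. It is a naive construction for clarity (the paper's data structures and A* machinery are far more involved), and the toy corpus is an assumption.

    def build_suffix_array(tokens):
        # Naive O(n^2 log n) construction, for illustration only;
        # production systems use linear-time algorithms.
        return sorted(range(len(tokens)), key=lambda i: tokens[i:])

    def find_ngram(tokens, sa, ngram):
        # All corpus positions where the exact n-gram occurs, found by
        # binary search over the lexicographically sorted suffixes.
        ngram = tuple(ngram)
        n = len(ngram)
        lo, hi = 0, len(sa)
        while lo < hi:                      # lower bound
            mid = (lo + hi) // 2
            if tuple(tokens[sa[mid]:sa[mid] + n]) < ngram:
                lo = mid + 1
            else:
                hi = mid
        start, hi = lo, len(sa)
        while lo < hi:                      # upper bound
            mid = (lo + hi) // 2
            if tuple(tokens[sa[mid]:sa[mid] + n]) <= ngram:
                lo = mid + 1
            else:
                hi = mid
        return [sa[i] for i in range(start, lo)]

    corpus = "the house is small the house is big".split()
    sa = build_suffix_array(corpus)
    print(find_ngram(corpus, sa, ("the", "house")))   # [4, 0]: both occurrences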

Combining Confidence Estimation and Reference-based Metrics for Segment-level MT Evaluation
Lucia Specia | Jesús Giménez

We describe an effort to improve standard reference-based metrics for Machine Translation (MT) evaluation by enriching them with Confidence Estimation (CE) features and using a learning mechanism trained on human annotations. Reference-based MT evaluation metrics compare the system output against reference translations looking for overlaps at different levels (lexical, syntactic, and semantic). These metrics aim at comparing MT systems or analyzing the progress of a given system and are known to have reasonably good correlation with human judgments at the corpus level, but not at the segment level. CE metrics, on the other hand, target the system in use, providing a quality score to the end-user for each translated segment. They cannot rely on reference translations, and use instead information extracted from the input text, system output and possibly external corpora to train machine learning algorithms. These metrics correlate better with human judgments at the segment level. However, they are usually highly biased by difficulty level of the input segment, and therefore are less appropriate for comparing multiple systems translating the same input segments. We show that these two classes of metrics are complementary and can be combined to provide MT evaluation metrics that achieve higher correlation with human judgments at the segment level.
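
The combination idea itself is straightforward to sketch: treat segment-level reference-based scores and CE features as inputs to a regressor trained on human judgments. The sketch below uses an SVM regressor from scikit-learn; the particular features and the tiny training set are invented for illustration and are not the authors' feature set.

    import numpy as np
    from sklearn.svm import SVR

    # Each row mixes reference-based metric scores (e.g., segment-level
    # BLEU, METEOR) with reference-free CE features (e.g., source length,
    # target LM perplexity). Values here are illustrative.
    X_train = np.array([[0.31, 0.45, 12, 210.0],
                        [0.55, 0.60,  8,  95.0],
                        [0.12, 0.20, 25, 400.0]])
    y_train = np.array([3.0, 4.5, 2.0])   # segment-level human judgments

    model = SVR(kernel="rbf").fit(X_train, y_train)
    print(model.predict(np.array([[0.40, 0.50, 10, 150.0]])))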

The Impact of Arabic Morphological Segmentation on Broad-coverage English-to-Arabic Statistical Machine Translation
Hassan Al-Haj | Alon Lavie

Morphologically rich languages pose a challenge for statistical machine translation (SMT). This challenge is magnified when translating into a morphologically rich language. In this work we address this challenge in the framework of broad-coverage English-to-Arabic phrase-based statistical machine translation (PBSMT). We explore the full spectrum of Arabic segmentation schemes, ranging from full word form to fully segmented forms, and examine the effects on system performance. Our results show a difference of 2.61 BLEU points between the best and worst segmentation schemes, indicating that the choice of segmentation scheme has a significant effect on the performance of a PBSMT system in a large-data scenario. We also show that a simple segmentation scheme can perform as well as the best, more complicated segmentation scheme. We also report results on a wide set of techniques for recombining the segmented Arabic output.

Arabic Dialect Handling in Hybrid Machine Translation
Hassan Sawaf

In this paper, we describe an extension to a hybrid machine translation system for handling dialectal Arabic, using a decoding algorithm to normalize non-standard, spontaneous and dialectal Arabic into Modern Standard Arabic. We demonstrate the feasibility of the approach by measuring and comparing machine translation results in terms of BLEU with and without the proposed approach. Our tests show a BLEU increase of about 1% on real-life broadcast input with transcriptions of dialectal speech, and of about 2% on web content with dialectal text.

Coupling Statistical Machine Translation with Rule-based Transfer and Generation
Arafat Ahsan | Prasanth Kolachina | Sudheer Kolachina | Dipti Misra | Rajeev Sangal

In this paper, we present the insights gained from a detailed study of coupling a highly modular English-Hindi RBMT system with a standard phrase-based SMT system. Coupling the RBMT and SMT systems at various stages in the RBMT pipeline, we observe the effects of the source transformations at each stage on the performance of the coupled MT system. We propose an architecture that systematically exploits the structural transfer and robust generation capabilities of the RBMT system. Working with the English-Hindi language pair, we show that the coupling configurations explored in our experiments help address different aspects of the typological divergence between these languages. In spite of working with very small datasets, we report significant improvements both in terms of BLEU (7.14 and 0.87 over the RBMT and the SMT baselines respectively) and subjective evaluation (relative decrease of 17% in SSER).

Semantically-Informed Syntactic Machine Translation: A Tree-Grafting Approach
Kathryn Baker | Michael Bloodgood | Chris Callison-Burch | Bonnie Dorr | Nathaniel Filardo | Lori Levin | Scott Miller | Christine Piatko

We describe a unified and coherent syntactic framework for supporting a semantically-informed syntactic approach to statistical machine translation. Semantically enriched syntactic tags assigned to the target-language training texts improved translation quality. The resulting system significantly outperformed a linguistically naive baseline model (Hiero), and reached the highest scores yet reported on the NIST 2009 Urdu-English translation task. This finding supports the hypothesis (posed by many researchers in the MT community, e.g., in DARPA GALE) that both syntactic and semantic information are critical for improving translation quality—and further demonstrates that large gains can be achieved for low-resource languages with different word order than English.

A Cocktail of Deep Syntactic Features for Hierarchical Machine Translation
Daniel Stein | Stephan Peitz | David Vilar | Hermann Ney

In this work we review and compare three additional syntactic enhancements for the hierarchical phrase-based translation model, which have been presented in the last few years. We compare their performance when applied separately and study whether their combination may yield additional improvements. Our findings show that the models are complementary, and that their combination achieves an increase of 1% in BLEU and a reduction of nearly 2% in TER. The models presented in this work are made available as part of the Jane open source machine translation toolkit.

Using TERp to Augment the System Combination for SMT
Jinhua Du | Andy Way

TER-Plus (TERp) is an extended TER evaluation metric incorporating morphology, synonymy and paraphrases. There are three new edit operations in TERp: Stem Matches, Synonym Matches and Phrase Substitutions (Paraphrases). In this paper, we propose a TERp-based augmented system combination in terms of backbone selection and the consensus decoding network. Combining the new properties of TERp, we also propose a two-pass decoding strategy for the lattice-based phrase-level confusion network (CN) to generate the final result. The experiments conducted on the NIST2008 Chinese-to-English test set show that our TERp-based augmented system combination framework achieves significant improvements in terms of BLEU and TERp scores compared to the state-of-the-art word-level system combination framework and a TER-based combination strategy.

f-align: An Open-Source Alignment Tool for LFG f-Structures
Anton Bryl | Josef van Genabith

Lexical-Functional Grammar (LFG) f-structures (Kaplan and Bresnan, 1982) have attracted some attention in recent years as an intermediate data representation for statistical machine translation. So far, however, there are no alignment tools capable of aligning f-structures directly, and plain word alignment is used for this purpose. In this way no use is made of the structural information contained in f-structures. We present the first version of f-align, a specialized open-source tool for aligning f-structures directly.

Improved Phrase-based SMT with Syntactic Reordering Patterns Learned from Lattice Scoring
Jie Jiang | Jinhua Du | Andy Way

In this paper, we present a novel approach to incorporate source-side syntactic reordering patterns into phrase-based SMT. The main contribution of this work is to use the lattice scoring approach to exploit and utilize reordering information that is favoured by the baseline PBSMT system. By referring to the parse trees of the training corpus, we represent the observed reorderings with source-side syntactic patterns. The extracted patterns are then used to convert the parsed inputs into word lattices, which contain both the original source sentences and their potential reorderings. Weights of the word lattices are estimated from the observations of the syntactic reordering patterns in the training corpus. Finally, the PBSMT system is tuned and tested on the generated word lattices to show the benefits of adding potential source-side reorderings in the inputs. We confirmed the effectiveness of our proposed method on a medium-sized corpus for a Chinese-to-English machine translation task. Our method outperformed the baseline system by 1.67% relative on a randomly selected test set and 8.56% relative on the NIST 2008 test set in terms of BLEU score.

Transliterating From All Languages
Ann Irvine | Chris Callison-Burch | Alexandre Klementiev

Much of the previous work on transliteration has depended on resources and attributes specific to particular language pairs. In this work, rather than focus on a single language pair, we create robust models for transliterating from all languages in a large, diverse set to English. We create training data for 150 languages by mining name pairs from Wikipedia. We train 13 systems and analyze the effects of the amount of training data on transliteration performance. We also present an analysis of the types of errors that the systems make. Our analyses are particularly valuable for building machine translation systems for low resource languages, where creating and integrating a transliteration module for a language with few NLP resources may provide substantial gains in translation performance.

Using Sublexical Translations to Handle the OOV Problem in MT
Chung-chi Huang | Ho-ching Yen | Shih-ting Huang | Jason Chang

We introduce a method for learning to translate out-of-vocabulary (OOV) words. The method focuses on combining sublexical/constituent translations of an OOV to generate its translation candidates. In our approach, wild-card searches are formulated based on our OOV analysis, aimed at maximizing the probability of retrieving OOVs’ sublexical translations from the existing resources of machine translation (MT) systems. At run-time, translation candidates of the unknown words are generated from their suitable sublexical translations and ranked based on monolingual and bilingual information. We have incorporated the OOV model into a state-of-the-art MT system and experimental results show that our model indeed helps to ease the negative impact of OOVs on translation quality, especially for sentences containing more OOVs (significant improvement).
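
A minimal sketch of the sublexical combination step: segment the OOV into units found in an existing bilingual lexicon and take the cross-product of the units' translations as candidates, which would then be ranked with monolingual and bilingual features. The lexicon, the segmentation strategy, and the English-Chinese example are all illustrative assumptions, not the paper's wild-card retrieval mechanism.

    from itertools import product

    # Hypothetical sublexical lexicon, e.g., mined from an MT system's
    # existing translation resources.
    lexicon = {"micro": ["微"], "biology": ["生物学", "生物"], "lab": ["实验室"]}

    def segmentations(word, max_parts=3):
        # All ways to split the word into units known to the lexicon.
        if not word:
            return [[]]
        if max_parts == 0:
            return []
        segs = []
        for end in range(1, len(word) + 1):
            if word[:end] in lexicon:
                for rest in segmentations(word[end:], max_parts - 1):
                    segs.append([word[:end]] + rest)
        return segs

    def oov_candidates(oov):
        # Combine the units' translations into candidate translations.
        cands = set()
        for seg in segmentations(oov):
            for combo in product(*(lexicon[u] for u in seg)):
                cands.add("".join(combo))
        return cands

    print(oov_candidates("microbiology"))   # {'微生物学', '微生物'}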

MT-based Sentence Alignment for OCR-generated Parallel Texts
Rico Sennrich | Martin Volk

The performance of current sentence alignment tools varies according to the to-be-aligned texts. We have found existing tools unsuitable for hard-to-align parallel texts and describe an alternative alignment algorithm. The basic idea is to use machine translations of a text and BLEU as a similarity score to find reliable alignments which are used as anchor points. The gaps between these anchor points are then filled using BLEU-based and length-based heuristics. We show that this approach outperforms state-of-the-art algorithms in our alignment task, and that this improvement in alignment quality translates into better SMT performance. Furthermore, we show that even length-based alignment algorithms profit from having a machine translation as a point of comparison.
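
The anchoring idea can be sketched compactly: machine-translate the source sentences, compute a smoothed sentence-level BLEU between each translation and each target-side sentence, and keep confident, monotonically increasing 1-1 matches as anchor points. The smoothing scheme, the threshold, and the greedy monotone filter below are simplifying assumptions, not the paper's exact procedure.

    import math
    from collections import Counter

    def sent_bleu(hyp, ref, max_n=4):
        # Smoothed sentence-level BLEU (add-one on n-gram precisions).
        log_p = 0.0
        for n in range(1, max_n + 1):
            h = Counter(zip(*[hyp[i:] for i in range(n)]))
            r = Counter(zip(*[ref[i:] for i in range(n)]))
            overlap = sum((h & r).values())
            total = max(len(hyp) - n + 1, 0)
            log_p += math.log((overlap + 1) / (total + 1))
        bp = min(1.0, math.exp(1 - len(ref) / max(len(hyp), 1)))
        return bp * math.exp(log_p / max_n)

    def find_anchors(mt_sents, tgt_sents, threshold=0.3):
        # Keep confident, monotone 1-1 matches as anchor points; the gaps
        # between anchors would then be filled with cheaper heuristics.
        anchors, last_j = [], -1
        for i, mt in enumerate(mt_sents):
            j, s = max(((j, sent_bleu(mt, tgt)) for j, tgt in enumerate(tgt_sents)),
                       key=lambda x: x[1])
            if s >= threshold and j > last_j:
                anchors.append((i, j))
                last_j = j
        return anchors

    mt = ["the house is small".split()]
    tgt = ["the house is small".split(), "completely unrelated text".split()]
    print(find_anchors(mt, tgt))   # [(0, 0)]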

Detecting Cross-lingual Semantic Similarity Using Parallel PropBanks
Shumin Wu | Jinho Choi | Martha Palmer

This paper suggests a method for detecting cross-lingual semantic similarity using parallel PropBanks. We begin by improving word alignments for verb predicates generated by GIZA++ by using information available in parallel PropBanks. We applied the Kuhn-Munkres method to measure predicate-argument matching and improved verb predicate alignments by an F-score of 12.6%. Using the enhanced word alignments we checked the set of target verbs aligned to a specific source verb for semantic consistency. For a set of English verbs aligned to a Chinese verb, we checked if the English verbs belong to the same semantic class using an existing lexical database, WordNet. For a set of Chinese verbs aligned to an English verb we manually checked semantic similarity between the Chinese verbs within a set. Our results show that the verb sets we generated have a high correlation with semantic classes. This could potentially lead to an automatic technique for generating semantic classes for verbs.
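
The Kuhn-Munkres step is the part that is easiest to make concrete: given a similarity matrix between the arguments of a source predicate and a target predicate, the algorithm finds the one-to-one matching with maximal total similarity. The sketch below uses scipy's implementation; the similarity values are invented for illustration.

    import numpy as np
    from scipy.optimize import linear_sum_assignment  # Kuhn-Munkres

    # Illustrative similarities between the arguments of an aligned
    # Chinese-English predicate pair (e.g., overlap of word-aligned tokens).
    similarity = np.array([[0.9, 0.1, 0.0],
                           [0.2, 0.8, 0.1],
                           [0.0, 0.2, 0.6]])

    # linear_sum_assignment minimizes cost, so negate to maximize similarity.
    rows, cols = linear_sum_assignment(-similarity)
    print([(int(r), int(c)) for r, c in zip(rows, cols)])  # [(0, 0), (1, 1), (2, 2)]
    print(float(similarity[rows, cols].mean()))            # matching quality, ~0.77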

Combining Multi-Domain Statistical Machine Translation Models using Automatic Classifiers
Pratyush Banerjee | Jinhua Du | Baoli Li | Sudip Naskar | Andy Way | Josef van Genabith

This paper presents a set of experiments on Domain Adaptation of Statistical Machine Translation systems. The experiments focus on Chinese-English and two domain-specific corpora. The paper presents a novel approach for combining multiple domain-trained translation models to achieve improved translation quality for both domain-specific as well as combined sets of sentences. We train a statistical classifier to classify sentences according to the appropriate domain and utilize the corresponding domain-specific MT models to translate them. Experimental results show that the method achieves a statistically significant absolute improvement of 1.58 BLEU (2.86% relative improvement) over a translation model trained on combined data, and considerable improvements over a model using multiple decoding paths of the Moses decoder, for the combined domain test set. Furthermore, even for domain-specific test sets, our approach performs almost as well as dedicated domain-specific models with perfect classification.
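
The routing idea reduces to a small amount of glue code: train a sentence-level domain classifier, then dispatch each input to the MT model trained on the predicted domain. The sketch below uses a Naive Bayes classifier from scikit-learn and stub decoders; in practice the decoders would be separate domain-trained Moses models, and the training sentences here are invented.

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.naive_bayes import MultinomialNB
    from sklearn.pipeline import make_pipeline

    # Sentence-level domain classifier trained on labeled in-domain text.
    train_sents = ["the patient was administered five milligrams",
                   "the court ruled on the pending appeal"]
    train_domains = ["medical", "legal"]
    clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), MultinomialNB())
    clf.fit(train_sents, train_domains)

    # Stubs standing in for separate domain-trained MT models.
    decoders = {"medical": lambda s: "[medical model] " + s,
                "legal": lambda s: "[legal model] " + s}

    def translate(sentence):
        domain = clf.predict([sentence])[0]   # route to the predicted domain
        return decoders[domain](sentence)

    print(translate("the judge dismissed the case"))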

Using Variable Decoding Weight for Language Model in Statistical Machine Translation
Behrang Mohit | Rebecca Hwa | Alon Lavie

This paper investigates varying the decoder weight of the language model (LM) when translating different parts of a sentence. We determine the conditions under which the LM weight should be adapted. We find that a better translation can be achieved by varying the LM weight when decoding the most problematic spot in a sentence, which we refer to as a difficult segment. Two adaptation strategies are proposed and compared through experiments. We find that adapting a different LM weight for each difficult segment results in the largest improvement in translation quality.

Refining Word Alignment with Discriminative Training
Nadi Tomeh | Alexandre Allauzen | François Yvon | Guillaume Wisniewski

The quality of statistical machine translation systems depends on the quality of the word alignments that are computed during the translation model training phase. IBM alignment models, as implemented in the GIZA++ toolkit, constitute the de facto standard for performing these computations. The resulting alignments and translation models are however very noisy, and several authors have tried to improve them. In this work, we propose a simple and effective approach, which considers alignment as a series of independent binary classification problems in the alignment matrix. Through extensive feature engineering and the use of stacking techniques, we were able to obtain alignments much closer to manually defined references than those obtained by the IBM models. These alignments also yield better translation models, delivering improved performance in a large scale Arabic to English translation task.

Maximizing TM Performance through Sub-Tree Alignment and SMT
Ventsislav Zhechev | Josef van Genabith

With the steadily increasing demand for high-quality translation, the localisation industry is constantly searching for technologies that would increase translator throughput, in particular focusing on the use of high-quality Statistical Machine Translation (SMT) supplementing the established Translation Memory (TM) technology. In this paper, we present a novel modular approach that utilises state-of-the-art sub-tree alignment and SMT techniques to turn the fuzzy matches from a TM into near-perfect translations. Rather than relegate SMT to a last-resort status where it is only used should the TM system fail to produce the desired output, for us SMT is an integral part of the translation process that we rely on to obtain high-quality results. We show that the presented system consistently produces better-quality output than the TM and performs on par or better than the standalone SMT system.

Choosing the Right Evaluation for Machine Translation: an Examination of Annotator and Automatic Metric Performance on Human Judgment Tasks
Michael Denkowski | Alon Lavie

This paper examines the motivation, design, and practical results of several types of human evaluation tasks for machine translation. In addition to considering annotator performance and task informativeness over multiple evaluations, we explore the practicality of tuning automatic evaluation metrics to each judgment type in a comprehensive experiment using the METEOR-NEXT metric. We present results showing clear advantages of tuning to certain types of judgments and discuss causes of inconsistency when tuning to various judgment data, as well as sources of difficulty in the human evaluation tasks themselves.

Incremental Re-training for Post-editing SMT
Daniel Hardt | Jakob Elming

A method is presented for incremental re-training of an SMT system, in which a local phrase table is created and incrementally updated as a file is translated and post-edited. It is shown that translation data from within the same file has higher value than other domain-specific data. In two technical domains, within-file data increases BLEU score by several full points. Furthermore, a strong recency effect is documented; nearby data within the file has greater value than more distant data. It is also shown that the value of translation data is strongly correlated with a metric defined over new occurrences of n-grams. Finally, it is argued that the incremental re-training prototype could serve as the basis for a practical system which could be interactively updated in real time in a post-editing setting. Based on the results here, such an interactive system has the potential to dramatically improve translation quality.
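
A minimal sketch of the incremental idea follows: maintain a local phrase table that is updated after every post-edited segment and decays older entries, reflecting the recency effect reported above. Phrase-pair extraction from word alignments is omitted, and the decay constant and API are assumptions, not the prototype's actual design.

    from collections import defaultdict

    class LocalPhraseTable:
        # Toy incremental phrase table built from post-edited segments.
        def __init__(self, decay=0.95):
            self.scores = defaultdict(float)
            self.decay = decay

        def update(self, phrase_pairs):
            # Called after each segment is post-edited; decaying old
            # evidence first models the documented recency effect.
            for key in list(self.scores):
                self.scores[key] *= self.decay
            for src, tgt in phrase_pairs:
                self.scores[(src, tgt)] += 1.0

        def lookup(self, src):
            cands = [(t, s) for (ps, t), s in self.scores.items() if ps == src]
            return max(cands, key=lambda x: x[1])[0] if cands else None

    table = LocalPhraseTable()
    table.update([("la maison", "the house"), ("bleue", "blue")])
    table.update([("la maison", "the home")])
    print(table.lookup("la maison"))   # "the home": recent evidence wins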

A Source-side Decoding Sequence Model for Statistical Machine Translation
Minwei Feng | Arne Mauser | Hermann Ney

We propose a source-side decoding sequence language model for phrase-based statistical machine translation. This model is a reordering model in the sense that it helps the decoder find the correct decoding sequence. The model uses word-aligned bilingual training data. We show improved translation quality of up to 1.34% BLEU and 0.54% TER using this model compared to three other widely used reordering models.

Supertags as Source Language Context in Hierarchical Phrase-Based SMT
Rejwanul Haque | Sudip Naskar | Antal van den Bosch | Andy Way

Statistical machine translation (SMT) models have recently begun to include source context modeling, under the assumption that the proper lexical choice of the translation for an ambiguous word can be determined from the context in which it appears. Various types of lexical and syntactic features have been explored as effective source context to improve phrase selection in SMT. In the present work, we introduce lexico-syntactic descriptions in the form of supertags as source-side context features in the state-of-the-art hierarchical phrase-based SMT (HPB) model. These features enable us to exploit source similarity in addition to target similarity, as modelled by the language model. In our experiments two kinds of supertags are employed: those from lexicalized tree-adjoining grammar (LTAG) and combinatory categorial grammar (CCG). We use a memory-based classification framework that enables the efficient estimation of these features. Despite the differences between the two supertagging approaches, they give similar improvements. We evaluate the performance of our approach on an English-to-Dutch translation task, and report statistically significant improvements of 4.48% and 6.3% in BLEU score when adding CCG and LTAG supertags, respectively, as context-informed features.

Translating Structured Documents
George Foster | Pierre Isabelle | Roland Kuhn

Machine Translation traditionally treats documents as sets of independent sentences. In many genres, however, documents are highly structured, and their structure contains information that can be used to improve translation quality. We present a preliminary approach to document translation that uses structural features to modify the behaviour of a language model, at sentence-level granularity. To our knowledge, this is the first attempt to incorporate structural information into statistical MT. In experiments on structured English/French documents from the Hansard corpus, we demonstrate small but statistically significant improvements.

Extending the Hierarchical Phrase Based Model with Maximum Entropy Based BTG
Zhongjun He | Yao Meng | Hao Yu

In the hierarchical phrase based (HPB) translation model, in addition to hierarchical phrase pairs extracted from bi-text, glue rules are used to perform serial combination of phrases. However, this basic method for combining phrases is not sufficient for phrase reordering. In this paper, we extend the HPB model with maximum entropy based bracketing transduction grammar (BTG), which provides content-dependent combination of neighboring phrases in two ways: serial or inverse. Experimental results show that the extended HPB system achieves absolute improvements of 0.9–1.8 BLEU points over the baseline for large-scale translation tasks.

Transferring Syntactic Relations of Subject-Verb-Object Pattern in Chinese-to-Korean SMT
Jin-Ji Li | Jungi Kim | Jong-Hyeok Lee

Since most Korean postpositions signal grammatical functions such as syntactic relations, generating incorrect Korean postpositions produces ungrammatical output in machine translation targeting Korean. Chinese and Korean form a morphosyntactically divergent language pair, and Korean postpositions usually do not have counterparts in Chinese. In this paper, we propose a preprocessing method for a statistical MT system that generates more adequate Korean postpositions. We transfer syntactic relations of subject-verb-object patterns in Chinese sentences and enrich the source with these transferred relations in order to reduce the morpho-syntactic differences. The effectiveness of our proposed method is measured with lexical units of various granularities. Human evaluation also suggests improvements over previous methods, consistent with the results of the automatic evaluation.

Improving the Post-Editing Experience using Translation Recommendation: A User Study
Yifan He | Yanjun Ma | Johann Roturier | Andy Way | Josef van Genabith

We report findings from a user study with professional post-editors using a translation recommendation framework (He et al., 2010) to integrate Statistical Machine Translation (SMT) output with Translation Memory (TM) systems. The framework recommends SMT outputs to a TM user when it predicts that SMT outputs are more suitable for post-editing than the hits provided by the TM. We analyze the effectiveness of the model as well as the reaction of potential users. Based on the performance statistics and the users’ comments, we find that translation recommendation can reduce the workload of professional post-editors and improve the acceptance of MT in the localization industry.

Accuracy-Based Scoring for Phrase-Based Statistical Machine Translation
Sergio Penkale | Yanjun Ma | Daniel Galron | Andy Way

Although the scoring features of state-of-the-art Phrase-Based Statistical Machine Translation (PB-SMT) models are weighted so as to optimise an objective function measuring translation quality, the estimation of the features themselves does not have any relation to such quality metrics. In this paper, we introduce a translation quality-based feature to PB-SMT in a bid to improve the translation quality of the system. Our feature is estimated by averaging the edit-distance between phrase pairs involved in the translation of oracle sentences, chosen by automatic evaluation metrics from the N-best outputs of a baseline system, and phrase pairs occurring in the N-best list. Using our method, we report a statistically significant 2.11% relative improvement in BLEU score for the WMT 2009 Spanish-to-English translation task. We also report that using our method we can achieve statistically significant improvements over the baseline using many other MT evaluation metrics, and a substantial increase in speed and reduction in memory use (due to a reduction in phrase-table size of 87%) while maintaining significant gains in translation quality.
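
The feature estimation can be sketched as follows: score each phrase pair by its average similarity (one minus word-level normalized edit distance) to the phrases used in oracle translations of the same source phrase. The dynamic-programming edit distance is standard; the toy phrases are illustrative, not the paper's data.

    def edit_distance(a, b):
        # Word-level Levenshtein distance via dynamic programming.
        dp = list(range(len(b) + 1))
        for i, wa in enumerate(a, 1):
            prev, dp[0] = dp[0], i
            for j, wb in enumerate(b, 1):
                prev, dp[j] = dp[j], min(dp[j] + 1, dp[j - 1] + 1,
                                         prev + (wa != wb))
        return dp[-1]

    def accuracy_feature(tgt_phrase, oracle_phrases):
        # Average similarity of a phrase pair's target side to the target
        # phrases used in oracle (metric-selected) translations.
        sims = [1 - edit_distance(tgt_phrase, o) / max(len(tgt_phrase), len(o))
                for o in oracle_phrases]
        return sum(sims) / len(sims)

    print(accuracy_feature("the red car".split(),
                           ["the red car".split(), "a red car".split()]))  # ~0.83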

Improving Reordering in Statistical Machine Translation from Farsi
Evgeny Matusov | Selçuk Köprü

In this paper, we propose a novel model for scoring reordering in phrase-based statistical machine translation (SMT) and successfully use it for translation from Farsi into English and Arabic. The model replaces the distance-based distortion model that is widely used in most SMT systems. The main idea of the model is to penalize each new deviation from the monotonic translation path. We also propose a way for combining this model with manually created reordering rules for Farsi which try to alleviate the difference in sentence structure between Farsi and English/Arabic by changing the position of the verb. The rules are used in the SMT search as soft constraints. In the experiments on two general-domain translation tasks, the proposed penalty-based model improves the BLEU score by up to 1.5% absolute as compared to the baseline of monotonic translation, and up to 1.2% as compared to using the distance-based distortion model.
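
One toy reading of the penalty-based idea, contrasted with distance-based distortion: charge a fixed cost each time the decoder's coverage of source positions breaks the monotone path, regardless of jump width. The exact triggering conditions in the paper differ; this sketch only illustrates how the two scoring behaviours diverge.

    def deviation_penalty(coverage_order, penalty=1.0):
        # Charge `penalty` whenever the next covered source position does
        # not directly follow the previous one; contiguous runs are free.
        cost, prev = 0.0, -1
        for pos in coverage_order:
            if pos != prev + 1:
                cost += penalty
            prev = pos
        return cost

    def distance_distortion(coverage_order):
        # The widely used baseline: cost grows with the width of each jump.
        cost, prev = 0.0, -1
        for pos in coverage_order:
            cost += abs(pos - (prev + 1))
            prev = pos
        return cost

    order = [4, 0, 1, 2, 3]            # e.g., the verb moved to the front
    print(deviation_penalty(order))    # 2.0: one deviation plus the return
    print(distance_distortion(order))  # 9.0: penalized by jump width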

Chinese Syntactic Reordering through Contrastive Analysis of Predicate-predicate Patterns in Chinese-to-Korean SMT
Jin-Ji Li | Jungi Kim | Jong-Hyeok Lee

We propose a Chinese dependency tree reordering method for Chinese-to-Korean SMT systems through analyzing systematic differences between the Chinese and Korean languages. Translating predicate-predicate patterns in Chinese into Korean raises various issues such as long-distance reordering. This paper concentrates on syntactic reordering of predicate-predicate patterns in Chinese dependency trees through contrastively analyzing construction types in Chinese and their corresponding translations in Korean. We explore useful linguistic knowledge that assists effective syntactic reordering of Chinese dependency trees; we design two experiments with different kinds of linguistic knowledge combined with the phrase and hierarchical phrase-based SMT systems, and assess the effectiveness of our proposed methods. The experiments achieved significant improvements by resolving the long-distance reordering problem.

Machine Translation Using Overlapping Alignments and SampleRank
Benjamin Roth | Andrew McCallum | Marc Dymetman | Nicola Cancedda

We present a conditional-random-field approach to discriminatively-trained phrase-based machine translation in which training and decoding are both cast in a sampling framework and are implemented uniformly in a new probabilistic programming language for factor graphs. In traditional phrase-based translation, decoding infers both a "Viterbi" alignment and the target sentence. In contrast, in our approach, a rich overlapping-phrase alignment is produced by a fast deterministic method, while probabilistic decoding infers only the target sentence, which is then able to leverage arbitrary features of the entire source sentence, target sentence and alignment. By using SampleRank for learning we could in principle efficiently estimate hundreds of thousands of parameters. Test-time decoding is done by MCMC sampling with annealing. To demonstrate the potential of our approach we show preliminary experiments leveraging alignments that may contain overlapping bi-phrases.

A Comparison of Various Types of Extended Lexicon Models for Statistical Machine Translation
Matthias Huck | Martin Ratajczak | Patrick Lehnen | Hermann Ney

In this work we give a detailed comparison of the impact of the integration of discriminative and trigger-based lexicon models in state-of-the-art hierarchical and conventional phrase-based statistical machine translation systems. As both types of extended lexicon models can grow very large, we apply certain restrictions to discard some of the less useful information. We show how these restrictions facilitate the training of the extended lexicon models. We finally evaluate systems that incorporate both types of models with different restrictions on a large-scale translation task for the Arabic-English language pair. Our results suggest that extended lexicon models can be substantially reduced in size while still giving clear improvements in translation performance.

A Discriminative Lexicon Model for Complex Morphology
Minwoo Jeong | Kristina Toutanova | Hisami Suzuki | Chris Quirk

This paper describes successful applications of discriminative lexicon models to statistical machine translation (SMT) systems translating into morphologically complex languages. We extend previous work on discriminatively trained lexicon models to include more contextual information in making lexical selection decisions by building a single global log-linear model of translation selection. In offline experiments, we show that the use of the expanded contextual information, including morphological and syntactic features, helps to better predict words in three target languages with complex morphology (Bulgarian, Czech and Korean). We also show that these improved lexical prediction models have a positive impact in the end-to-end SMT scenario from English to these languages.

Voting on N-grams for Machine Translation System Combination
Kenneth Heafield | Alon Lavie

System combination exploits differences between machine translation systems to form a combined translation from several system outputs. Core to this process are features that reward n-gram matches between a candidate combination and each system output. Systems differ in performance at the n-gram level despite similar overall scores. We therefore advocate a new feature formulation: for each system and each small n, a feature counts n-gram matches between the system and candidate. We show post-evaluation improvement of 6.67 BLEU over the best system on NIST MT09 Arabic-English test data. Compared to a baseline system combination scheme from WMT 2009, we show improvement in the range of 1 BLEU point.
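
The proposed feature formulation is simple enough to sketch directly: for every system output and every small n, count the (clipped) n-gram matches between the candidate combination and that output. The feature naming and toy sentences below are illustrative assumptions.

    from collections import Counter

    def ngrams(tokens, n):
        return Counter(zip(*[tokens[i:] for i in range(n)]))

    def match_features(candidate, system_outputs, max_n=4):
        # One feature per (system, n): clipped n-gram matches between the
        # candidate combination and that system's output.
        feats = {}
        for sys_id, output in enumerate(system_outputs):
            for n in range(1, max_n + 1):
                overlap = ngrams(candidate, n) & ngrams(output, n)
                feats["match_sys%d_n%d" % (sys_id, n)] = sum(overlap.values())
        return feats

    cand = "the cat sat on the mat".split()
    systems = ["the cat sat on a mat".split(), "a cat is on the mat".split()]
    print(match_features(cand, systems, max_n=2))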

Improved Statistical Machine Translation with Hybrid Phrasal Paraphrases Derived from Monolingual Text and a Shallow Lexical Resource
Yuval Marton

Paraphrase generation is useful for various NLP tasks. But pivoting techniques for paraphrasing have limited applicability due to their reliance on parallel texts, although they benefit from linguistic knowledge implicit in the sentence alignment. Distributional paraphrasing has wider applicability, but doesn’t benefit from any linguistic knowledge. We combine a distributional semantic distance measure (based on a non-annotated corpus) with a shallow linguistic resource to create a hybrid semantic distance measure of words, which we extend to phrases. We embed this extended hybrid measure in a distributional paraphrasing technique, benefiting from both linguistic knowledge and independence from parallel texts. Evaluated in statistical machine translation tasks by augmenting translation models with paraphrase-based translation rules, we show our novel technique is superior to the non-augmented baseline and both the distributional and pivot paraphrasing techniques. We train models on both a full-size dataset as well as a simulated “low density” small dataset.


Proceedings of the 9th Conference of the Association for Machine Translation in the Americas: Student Research Workshop

Statistical Machine Translation of English-Manipuri using Morpho-syntactic and Semantic Information
Thoudam Doren Singh | Sivaji Bandyopadhyay

The English-Manipuri language pair is one of the rarely investigated pairs and has restricted bilingual resources. We report the development of a factored Statistical Machine Translation (SMT) system with English as source and Manipuri, a morphologically rich language, as target. The roles of suffixes and dependency relations on the source side and of case markers on the target side are identified as important translation factors. The morphology and dependency relations play important roles in improving the translation quality. A parallel corpus of 10,350 sentences from the news domain is used for training, and the system is tested with 500 sentences. Using the proposed translation factors, the translation quality is improved, as indicated by the BLEU score and subjective evaluation.

A Synchronous Context Free Grammar using Dependency Sequence for Syntax-based Statistical Machine Translation
Hwidong Na | Jin-Ji Li | Yeha Lee | Jong-hyeok Lee

We introduce a novel translation rule that captures discontinuous, partial-constituent, and non-projective phrases from the source language. Using the traversal order sequences of the dependency tree, our proposed method 1) extracts the synchronous rules in linear time and 2) combines them efficiently using the CYK chart parsing algorithm. We analytically show the effectiveness of this translation rule in translating sentences with relatively free word order, and empirically investigate the coverage of our proposed method.

Using Synonyms for Arabic-to-English Example-Based Translation
Kfir Bar | Nachum Dershowitz

An implementation of a non-structural Example-Based Machine Translation system that translates sentences from Arabic to English, using a parallel corpus aligned at the sentence level, is described. Source-language synonyms were derived automatically and used to help locate potential translation examples for fragments of a given input sentence. The smaller the parallel corpus, the greater the contribution provided by synonyms. Considering the degree of relevance of the subject matter of a potential match contributes to the quality of the final results.

Machine Translation between Hebrew and Arabic: Needs, Challenges and Preliminary Solutions
Reshef Shilon | Nizar Habash | Alon Lavie | Shuly Wintner

Hebrew and Arabic are related but mutually incomprehensible languages with complex morphology and scarce parallel corpora. Machine translation between the two languages is therefore interesting and challenging. We discuss similarities and differences between Hebrew and Arabic, the benefits and challenges that they induce, respectively, and their implications for machine translation. We highlight the shortcomings of using English as a pivot language and advocate a direct, transfer-based and linguistically-informed (but still statistical, and hence scalable) approach. We report preliminary results of such a system that we are currently developing.


Proceedings of the 9th Conference of the Association for Machine Translation in the Americas: Commercial MT User Program

Post-Editing Free Machine Translation: From a Language Vendor’s Perspective
Luciana Ramos

This paper presents a language vendor's perspective on the actual implementation of machine translation solutions in the translation/localization process. The lecture will be delivered at the AMTA-2010 Conference, and a short video will accompany the lecturer's talk.

Practical uses of MT at Global Language Translations and Consulting: A case study of MT use for profit
Doug Strock

This document describes the use of MT at GLTaC and provides an approach to determining whether offering MT services is right for you. There is no single answer or approach to providing MT services, so this is just one way an LSP has chosen to provide them.

MT in the Enterprise Environment
John Dixon

This paper aims to give an insight into some of the challenges and opportunities of implementing machine translation in an enterprise environment. It is written from a business perspective rather than a technical one and highlights how Applied Language Solutions has designed and rolled out a series of customer-specific machine translation solutions within our enterprise.

PangeaMT - putting open standards to work... well
E. Yuste | M. Herranz | A-L. Lagarda | L. Tarazón | I. Sánchez-Cortina | F. Casacuberta

PangeaMT is presented from our standpoint as an LSP keen to develop and implement a cost-effective translation automation strategy that is also in line with our full commitment to open standards. Moses lies at the very core of PangeaMT, but we have built several pre-/post-processing modules around it, from word reordering to an inline mark-up parser to TMX/XLIFF filters. These represent interesting breakthroughs in real-world, customized SMT applications.

One technology, many solutions: MT at Adobe
Raymond Flournoy | Jeff Rueppel

Over the last two years, Adobe Systems has incorporated Machine Translation with post-editing into the localization workflow. Currently, the number of products using MT for localization has grown to over a dozen, and the number of languages covered is now five. Adobe is continuing to expand the number of products which use MT for localization, and is also looking beyond localization to other applications of MT technology. In this paper, we discuss some of our further use cases, and the varying requirements each use case has for quality, customization, cost, and other factors. Based on those varying requirements, we consider a range of MT solutions beyond our current model of licensed, customized commercial engines.

Evaluating vendors for MT and post-editing at Avaya
Barbara Scott | Adriana Beaton

Avaya identified machine translation and post-editing as the next step in their strategy for global information management to deliver against the ever-present business objectives of “Increased Efficiency and Additional Localized Content”. Avaya shares how they assessed the market and selected their vendor.

Scenarios for Customizing an SMT Engine Based on Availability of Data
Kirti Vashee | Rustin Gibbs

Although still in a nascent state as a professional translation tool, customized SMT engines already have multiple applications, each of which requires clear definitions of quality and productivity. Three engine-training scenarios have emerged which are representative of real-world applications for the development and use of customized SMT engines, based on the availability of data. In the case that limited or no bilingual training data is available, a unique development process can be used to harvest and translate n-grams directly. Using this approach, Asia Online and Moravia IT have successfully customized SMT engines for use in various domains. A partnership between an MT engine provider and a qualified LSP is essential to deliver quality results using this approach.

Content Quality for Better MT: A Practical Guide to Quality at the Source
Jennifer Beaupre | Kent Taylor

With pressure to offer content in many languages, many companies are considering machine translation for faster delivery and lower translation costs, yet MT is notorious for poor-quality translation. How can you improve your content quality to make MT work for you? High-quality source content eliminates many of the common roadblocks to using machine translation effectively. In this presentation, Jennifer Beaupre, Marketing Director, and Kent Taylor, GM, acrolinx, will review what best practices have taught us about these topics: (1) Why is source content important when using machine translation? (2) How does source content affect translation costs? (3) How can source content improve the quality of MT output?

Using Machine Translation for the Localization of Electronic Support Content: Evaluating End-User Satisfaction
Osamuyimen Stewart | David Lubensky | Scott Macdonald | Julie Marcotte

This paper discusses how to measure the impact of online content localized by machine translation in meeting the business need of commercial users, i.e., reducing the volume of telephone calls to the Call Center (call deflection). We address various design, conceptual and practical issues encountered in proving the value of machine translation and conclude that the approach that will give the best result is one that reconciles end-user (human evaluation) feedback with web and Call Center data.

Better translations with user collaboration – Integrated MT at Microsoft
Chris Wendt

This paper outlines the methodologies Microsoft has deployed for seamless integration of human translation into the translation workflow, and describes a variety of methods to gather and collect human translation data. Increased amounts of parallel training data help to enhance the translation quality of the statistical MT system in use at Microsoft. The presentation covers the theory, the technical methodology as well as the experiences Microsoft has with the implementation, and practical use of such a system. Included is a discussion of the factors influencing the translation quality of a statistical MT system, a short description of the feedback collection mechanism in use at Microsoft, and the metrics it observed on its MT deployments.

Where can MT be most successful and what are the best MT engines for various languages?
Jenny Lu

CA’s globalization team has a long-term goal of reaching fully loaded costs of 10 cents per word. Fully loaded costs include the costs incurred for translation, localization QA, engineering, project management, and overall management. While translation budgets are gradually decreasing and volumes increasing, machine translation becomes an alternative means of producing more with less. This paper describes how CA Technologies is trying to accomplish this long-term goal by deploying MT systems to increase productivity at lower cost, in a relatively short time.

The “Moses for Localization” Open Source Project
Achim Ruopp

The open source statistical machine translation toolkit Moses has recently drawn a lot of attention in the localization industry. Companies see the chance to use Moses to leverage their existing translation assets and integrate MT into their localization processes. Due to the academic origins of Moses there are some obstacles to overcome when using it in an industry setting. In this paper we discuss what these obstacles are and how they are addressed by the newly established Moses for Localization open source project. We describe the different components of the project and the benefits a company can gain from using this open source project.

Effective MT within a Translation Workflow Panopticon
Sven Andrä | Jörg Schütz

In this presentation, we focus on integrating machine translation (MT) into an existing corporate localization and translation workflow. This MT-extended workflow includes a customized post-editing sub-workflow together with crowdsourced, incentive-based translation evaluation feedback routines that enable automated learning processes. The core of the implementation is a semantic repository that comprises the necessary information artifacts and links to language resources in order to organize, manage and monitor the different human and machine roles, tasks, and the entire lifecycle of the localization and translation supply chain(s).

PLuTO: MT for On-Line Patent Translation
John Tinsley | Andy Way | Páraic Sheridan

PLuTO – Patent Language Translation Online – is a partially EU-funded commercialization project which specializes in the automatic retrieval and translation of patent documents. At the core of the PLuTO framework is a machine translation (MT) engine through which web-based translation services are offered. The fully integrated PLuTO architecture includes a translation engine coupling MT with translation memories (TM), and a patent search and retrieval engine. In this paper, we first describe the motivating factors behind the provision of such a service. Following this, we give an overview of the PLuTO framework as a whole, with particular emphasis on the MT components, and provide a real-world use case scenario in which PLuTO MT services are exploited.

Sharing the Continental Airlines and SDL Post-Editing Experience
Adriana Beaton | Gabriela Contreras

This paper highlights the results and trends on post-editing and machine translation from the recent AMTA and SDL Automated Translation Survey. Then Continental Airlines and SDL share their experiences, and the benefits and challenges of human post-editing.

ProMT at PayPal: Enterprise-scale MT for financial industry content
Olga Beregovaya | Alex Yanishevsky

This paper describes the PROMT system deployment at PayPal, including: PayPal localization process challenges and requirements for a machine translation solution; technical specifications of PROMT Translation Server Developer Edition; linguistic customization performed by the PROMT team for PayPal; engineering customization performed by the PROMT team for PayPal; additional customized development performed by the PROMT team on behalf of PayPal; and PROMT engine and PayPal productivity gains and cost savings.


Trusted Translations Deliver Compelling Results for the Travel Industry
Daniel Marcu



Proceedings of the 9th Conference of the Association for Machine Translation in the Americas: Government MT User Program

Paralinguist Assessment Decision Factors For Machine Translation Output: A Case Study
Carol Van Ess-Dykema | Jocelyn Phillips | Florence Reeder | Laurie Gerber

We describe a case study that presents a framework for examining whether Machine Translation (MT) output enables translation professionals to translate faster while at the same time producing better quality translations than without MT output. We seek to find decision factors that enable a translation professional, known as a Paralinguist, to determine whether MT output is of sufficient quality to serve as a “seed translation” for post-editors. The decision factors, unlike MT developers’ automatic metrics, must function without a reference translation. We also examine the correlation of MT developers’ automatic metrics with error annotators’ assessments of post-edited translations.

Utilizing Automated Translation with Quality Scores to Increase Productivity
Daniel Marcu | Kathleen Egan | Chuck Simmons | Ning-Ning Mahlmann

Automated translation can assist with a variety of translation needs in government, from speeding up access to information for intelligence work to helping human translators increase their productivity. However, government entities need to have a mechanism in place so that they know whether or not they can trust the output from automated translation solutions. In this presentation, Language Weaver will present a new capability, "TrustScore": an automated scoring algorithm that communicates how good the automated translation is, using a meaningful metric. With this capability, each translation is automatically assigned a TrustScore from 1 to 5. A score of 1 indicates that the translation is unintelligible; a score of 3 indicates that meaning has been conveyed and that the translated content is actionable. A score approaching 4 or higher indicates that meaning and nuance have been carried through. This automatic prediction of quality has been validated by testing across significant numbers of data points in different companies and on different types of content. After outlining TrustScore and how it works, Language Weaver will discuss how a scoring mechanism like TrustScore could be used in a translation productivity workflow in government to assist linguists with day-to-day translation work. This would enable them to further benefit from their investments in automated translation software. Language Weaver will also share how TrustScore is used in commercial deployments to cost-effectively publish information in near real time.

Machine translation from English to Chinese: A study of Google’s performance with the UN documents
Li Zuo

The present study examines, from the users' perspective, the performance of Google's online translation service on documents of the United Nations. Since at least 2004, the United Nations has been exploring, piloting, and implementing computer-assisted translation (CAT), with Trados as the officially selected vehicle. A more recent development is the spontaneous adoption of Google translation among Chinese translators as an easy, versatile, and labor-saving tool. With machine translation becoming a reality for developers and end-users, there is a need to conduct a reality check to see how well it serves its purpose. The current study examines Google translation and its degree of assistance to the Chinese professional translators at the United Nations in particular. It uses a variety of UN documents to test and evaluate the performance of Google translation from English to Chinese. The sampled UN documents consist of 3 resolutions, 2 letters, 2 provisional agendas, 1 plenary verbatim record, 1 report, 1 note by the Secretariat, and 1 budget. The results vindicate Google's cutting edge in machine translation where English to Chinese is concerned, thanks to its powerful infrastructure and immense translation database. The conversion between the two languages takes only an instant, even for a fairly long piece. On top of that, Google gets terminology right more frequently and seems better able to make an intelligent guess when compared with other translation tools like MS Bing. But Google's Chinese output is far from intelligible, especially at the sentence level, primarily because of serious problems with word order and sentence parsing. There are also technical problems such as added or omitted words and erroneous rendering of numbers. Nevertheless, Google translation offers translators the option of working from its rough draft for the benefit of saving the time and pain of typing. The challenges of post-editing, however, may offset the time saved. Even though Google translation may not necessarily yield net speed gains when used to assist translation, it certainly is a beneficial labor saver, including of mental labor, when it performs at its best.


Foreign Media Collaboration Framework (FMCF)
Chuck Simmons

The Foreign Media Collaboration Framework (FMCF) is the latest approach by NASIC to provide a comprehensive system to process foreign language materials. FMCF is a Services Oriented Architecture (SOA) that provides an infrastructure to manage HLT tools, products, workflows, and services. This federated SOA solution adheres to DISA's NCES SOA Governance Model, DDMS XML for Metadata Capture/Dissemination, and IC-ISM for Security. The FMCF provides a cutting edge infrastructure that encapsulates multiple capabilities from multiple vendors in one place. This approach will accelerate HLT development, contain sustainment cost, minimize training, and brings the MT, OCR, ASR, audio/video, entity extraction, analytic tools and database under one umbrella, thus reducing the total cost of ownership.


Cross Lingual Arabic Blog Alerting (COLABA)
Kathleen Egan

Social media and tools for communication over the Internet have expanded a great deal in recent years. This expansion offers a diverse set of users a means to communicate more freely and spontaneously in mixed languages and genres (blogs, message boards, chat, texting, video and images). Dialectal Arabic is pervasive in written social media; however, current state-of-the-art tools made for Modern Standard Arabic (MSA) fail on Arabic dialects. COLABA enables MSA users to interpret dialects correctly. It helps find Arabic colloquial content that is currently not easily searchable and accessible to MSA queries. The COLABA team has built a suite of tools that offers users the ability to anonymously capture online unstructured media content from blogs, and to comprehend, organize, and validate content from informal and colloquial genres of online communication in MSA and a variety of Arabic dialects. The DoD/Combating Terrorism Technical Support Office/Technical Support Working Group (CTTSO/TSWG) awarded the contract to Acxiom Corporation and partners from MTI/IBM, Columbia University, Janya and Wichita State University to bring joint expertise to address this challenge. The suite has several applications: supporting language and cultural learning by making colloquial Arabic intelligible to students of MSA; retrieval and prioritization for triage and content analysis, by finding Arabic colloquial and dialect terms that today's search engines miss, by providing appropriate interpretations of colloquial Arabic, which is opaque to current analytics approaches, and by identifying named entities, events, topics, and sentiment; and enabling improved translations by MSA-trained MT systems through decreases in out-of-vocabulary terms, achieved by converting colloquial terms to MSA.


Pre-editing for Machine Translation
Weimin Jiang

It is common practice for linguists to do MT post-editing to improve translation accuracy and fluency. This presentation, however, examines the importance of pre-editing source material to improve MT. Even when a digital source file that is literally correct is used for MT, there are still factors that have a significant effect on MT translation accuracy and fluency. Based on 35 examples from more than 20 professional journals and websites, this article describes an experiment in pre-editing source material for Chinese-English MT in the science and technology domain. Pertinent examples are selected to illustrate how machine translation accuracy and fluency can be enhanced by pre-editing in four areas: providing a straightforward sentence structure, improving punctuation, using straightforward wording, and eliminating redundancy and superfluous elements.


Multi-Language Desktop Suite
Brian Roberson

Professional language analysts leverage a myriad of tools in their quest to produce accurate translations of foreign language material. The effectiveness of these tools ultimately affects resource allocation, information dissemination and subsequent follow-on mission planning; all three of which are vital, time-critical components in the intelligence cycle. This presentation will highlight the need for interactive tools that perform jointly in an operational environment, focusing on a dynamic suite of foreign language tools packaged into a desktop application and serving in a machine translation role. Basis Technology's Arabic/Afghan Desktop Suite (ADS) supports DOMEX, CELLEX, and HUMINT missions while being the most powerful Arabic, Dari and Pushto text analytic and processing software available. The ADS translates large scale lists of names from foreign language to English and also pinpoints place names appearing in reports with their coordinate locations on maps. With standardization output having to be more accurate than ever, the ADS ensures conformance with USG transliteration standards for Arabic script languages, including IC, BGN/PCGN, SATTS and MELTS. The ADS enables optimization of your limited resources and allows your analysts and linguists to be tasked more efficiently throughout the workflow process.


User-generated System for Critical Document Triage and Exploitation–Version 2011
Kristen Summers | Hassan Sawaf

CACI has developed and delivered systems for document exploitation and processing to Government customers around the world. Many of these systems include advanced language processing capabilities that enable rapid triage of vast collections of foreign language documents, separating the content that requires immediate human attention from the less immediately pressing material. AppTek provides key patent-pending Machine Translation technology for this critical process, rendering material in Arabic, Farsi, and other languages into English in a form that enables both further automated processing and rapid review by monolingual analysts to identify the documents that require immediate linguist attention. Both CACI and AppTek have been working with customers to develop capabilities that put the users in command of making their systems learn and continuously improve. We will describe how we built this critical user requirement into the systems and the key role that user involvement played. We will also discuss key components of the system and its planned customer-centric evolution, including our document translation workflow, the machine translation technology within it, and our approaches to supporting the technology and sustaining its success by adapting to user needs.


Task-based evaluation methods for machine translation, in practice and theory
Judith L. Klavans

A panel of industry and government experts will discuss ways in which they have applied task-based evaluation for Machine Translation and other language technologies in their organizations and share ideas for new methods that could be tried in the future. As part of the discussion, the panelists will address some of the following points: what task-based evaluation means within their organization, i.e., how task-based evaluation is defined; how task-based evaluation impacts the use of MT technologies in their work environment; whether task-based evaluation correlates with MT developers' automated metrics and, if not, how automated metrics that do correlate with the more expensive task-based evaluation can be developed; what lessons were learned in the course of performing task-based evaluation; and how task-based evaluations can be generalized to multiple workflow environments.


Exploring the AFPAK Web
Rod Holland

In spite of low literacy levels in Afghanistan and the Tribal Areas of Pakistan, the Pashto and Dari regions of the World Wide Web manifest diverse content from authors with a broad range of viewpoints. We have used cross-language information retrieval (CLIR) with machine translation to explore this content, and present an informal study of the principal genres that we have encountered. The suitability and limitations of existing machine translation packages for these languages are discussed with respect to exploiting this content.


Terminology Management for Web Monitoring
Sean Colbath

The current state of the art in speech recognition, machine translation, and natural language processing (NLP) technologies has allowed the development of powerful media monitoring systems that provide today's analysts with automatic tools for ingesting and searching through different types of data, such as broadcast video, web pages, documents, and scanned images. However, the core human-language technologies (HLT) in these media monitoring systems are static learners, meaning that they learn from a pool of labeled data and apply the induced knowledge to operational data in the field. To enable successful and widespread deployment and adoption of HLT, these technologies need to be able to adapt effectively to new operational domains on demand. To provide the US Government analyst with dynamic tools that adapt to these changing domains, these HLT systems must support customizable lexicons. However, lexicon customization in HLT systems presents another unique challenge, especially in the context of the multiple users of typical media monitoring system installations in the field. Lexicon customization requests from multiple users can be quite extensive and may conflict in orthographic representation (spelling, transliteration, or stylistic consistency) or in overall meaning. To protect against spurious and inconsistent updates to the system, media monitoring systems need to support a central terminology management capability to collect, manage, and execute customization requests across multiple users of the system. In this talk, we will describe the integration of a user-driven lexicon/dictionary customization and terminology management capability in the context of the Raytheon BBN Web Monitoring System (WMS), allowing intelligence analysts to update the Machine Translation (MT) system in the WMS with domain- and mission-specific source-to-English phrase translation rules. The Language Learning Broker (LLB) tool from the Technology Development Group (TDG) is a distributed system that supports dictionary/terminology management, personalized dictionaries, and a workflow between linguists and linguist management. LLB is integrated with the WMS to provide a terminology management capability for users to submit, review, validate, and manage customizations of the MT system through the WMS User Interface (UI). We will also describe an ongoing effort to measure the effectiveness of this user-driven customization capability, in terms of increased translation utility, through a controlled experiment conducted with the help of intelligence analysts.
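
As a rough illustration of the central-terminology-management idea (a sketch under stated assumptions, not the LLB/WMS implementation), the following collects phrase-rule requests from multiple users, surfaces conflicting submissions for linguist review, and applies only approved rules:

# Toy central store for user-submitted source-to-English phrase rules;
# class and method names are illustrative assumptions, not LLB's API.
from collections import defaultdict

class TerminologyStore:
    def __init__(self):
        self.pending = defaultdict(set)  # source phrase -> {(translation, user)}
        self.approved = {}               # source phrase -> vetted translation

    def submit(self, source, translation, user):
        self.pending[source.strip()].add((translation.strip(), user))

    def conflicts(self):
        # Requests whose distinct translations disagree need linguist review.
        return {s: reqs for s, reqs in self.pending.items()
                if len({t for t, _ in reqs}) > 1}

    def approve(self, source, translation):
        self.approved[source] = translation
        self.pending.pop(source, None)

    def apply_rules(self, text):
        # Longest-match-first override pass, so multi-word rules win
        # over their substrings.
        for s in sorted(self.approved, key=len, reverse=True):
            text = text.replace(s, self.approved[s])
        return text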


Use of HLT tools within the US Government
Nicholas Bemish

In today's post-9/11 world, the need for qualified linguists to process all the foreign language materials that are collected or confiscated overseas and at home has grown considerably. To date, there remains a gap between the number of linguists available and the number needed to process all this material. To fill this gap, the government has invested in the research, development, and implementation of Human Language Technologies in the linguist workflow. Most current DOMEX workflows incorporate HLT tools, whether Machine Translation, Named Entity Extraction, Name Normalization, or Transliteration tools. These tools aid linguists in processing and translating DOMEX material, cutting back on the amount of time needed to sift through all the material. In addition to the technologies used in workflow processes, we have also implemented tools for intelligence analysts, such as the Broadcast Monitoring System and Tripwire. These tools allow analysts who are not language-qualified to search through foreign language material and exploit it for intelligence value, drawing on technologies such as speech-to-text and machine translation. Part of the effort to fill this processing gap has been collaboration among members of the Intelligence Community on the research and development of tools. This type of engagement saves the government time and money by eliminating duplication of effort, and allows government agencies to share their ideas and expertise. Our presentation will address some of the tools that are currently in use throughout DoD or being considered for use, some of the challenges we face, and how we are making the best use of the HLT development and research that supports our needs.


WeBiText: Multilingual Concordancer Built from Public High Quality Web Content
Alain Désilets

In this paper, we describe WeBiText (www.webitext.ca) and how it is being used. WeBiText is a concordancer that allows translators to search large, high-quality multilingual web sites in order to find solutions to translation problems. After a quick overview of the system, we present results from an analysis of its logs, which provides a picture of how the tool is being used and how well it performs. We show that it is mostly used to find solutions for short, two- or three-word translation problems. The system produces at least one hit for 58% of the queries, and hits from at least five different web pages in 41% of cases. We show that 36% of the queries correspond to specialized language problems, which is much higher than what was previously reported for a similar concordancer based on the Canadian Hansard (TransSearch). We also provide a back-of-the-envelope estimate of the tool's current economic impact, which we put at $1 million per year, and growing rapidly.


Data Preparation for Machine Translation Customization
Stacey Bailey

The presentation will focus on ongoing work to develop sentence-aligned Chinese-English data for machine translation customization. Fully automatic alignment produces noisy data (e.g., containing OCR and alignment errors), and we are looking at the question of just how noisy the data can be while still producing translation improvements. Relatedly, data clean-up efforts are time- and labor-intensive, and we are examining whether the translation improvements justify the clean-up costs.
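
One cheap clean-up pass of the kind such a cost-benefit study might weigh against manual clean-up is sketched below; this is illustrative only, and the thresholds are assumptions to be tuned, not values from the presentation:

# An illustrative sketch: filter Chinese-English pairs whose length ratio or
# character content suggests OCR or alignment errors. Thresholds are assumed.
def looks_clean(zh, en, max_ratio=9.0):
    """Heuristic filter for a Chinese-English sentence pair."""
    zh, en = zh.strip(), en.strip()
    if not zh or not en:
        return False
    # Misalignments often pair a short sentence with a much longer one.
    ratio = len(en) / len(zh)
    if not (1.0 <= ratio <= max_ratio):
        return False
    # OCR errors often leave Latin characters on the "Chinese" side.
    ascii_frac = sum(c.isascii() for c in zh) / len(zh)
    return ascii_frac < 0.5

pairs = [("这是一个测试。", "This is a test."),
         ("图3", "Figure 3 shows the results of the second experiment.")]
print([p for p in pairs if looks_clean(*p)])  # keeps only the first pair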


Language NOW
Michael Ladwig

Language NOW is a natural language processing (NLP) research and development program with the goal of improving the performance of machine translation (MT) and other NLP technologies in mission-critical applications. The Language NOW research and development program has produced the following four primary advances as Government license-free technology: 1) A consistent and simple user interface developed to allow non-technical users, regardless of language proficiency, to use NLP technology in exploiting foreign language text content; Language NOW research has produced first-of-a-kind capabilities such as detection and handling of structured data, and direct processing and visualization of foreign language data with transliterations and translations. 2) A highly efficient NLP integration framework, the Abstract Scalable Language Services (ASLS); ASLS offers system developers easy implementation of an efficient, integrated, service-oriented architecture suitable for devices ranging from handheld computers to large enterprise computer clusters. 3) Service wrappers integrating commercial, Government license-free, open source, and research software that provide NLP services such as machine translation, named entity recognition, optical character recognition (OCR), transliteration, and text search. 4) STatistical Engines for Language Analysis (STELAE) and Maximum Entropy Extraction Pipeline (MEEP) tools that produce customized statistical machine translation and hybrid statistical/rule-based named entity recognition engines.


The Challenges of Distributed Parallel Corpora
Mike O’Malley

Parallel corpora have traditionally been created, maintained, and disseminated by translators and analysts addressing specific domains. They grow by aggregation, with individual contributions taking residence in the knowledge base. While the provenance of these new terms is known, their validity is not; they must be vetted by domain and language experts in order to be considered for use in the translation process. In order to address the evolving ecosphere surrounding parallel corpora, developers and analysts need to move beyond the data limitations of the static model. This traditional model does not fully take advantage of the new infiltration and exfiltration datapaths available in today's world of distributed knowledge bases. Incoming data are no longer simply textual: audio, imagery, and video are all critical components of corpus utility. Corpus maintainers have access to these media types through a variety of data sources, such as automated media monitoring services, the output of any number of translation environments, and translation memory exchanges (TMXs) developed by domain and language experts. These input opportunities are often pre-vetted and ready for automated inclusion in the parallel corpora; their content should not be reduced to the strictly textual. Unfortunately, the quality of the automated alignment and segmentation systems used in these pipelines remains a concern for the bulk preprocessing needed by downstream systems. These data sources share a common characteristic: known provenance. They are typically a vetted source and a regular provider to the parallel corpora, whether via daily newscasts or other means. Other data sources are distributed in nature and thus offer distinct challenges to the collection, vetting, and exploitation processes. One of the most exciting of these infiltration paths is crowdsourcing. A next-generation parallel corpus management system must be capable of, if not automatically incorporating crowdsourced terminology as a vetted source, at least facilitating manual inclusion of vetted crowdsourced terminology. This terminology may be submitted at any scale from practically any source. It may overlap or be contradictory; it almost certainly will require some degree of analysis and evaluation before inclusion. Fortunately, statistical analysis techniques are available to mitigate these concerns. One significant benefit of a crowdsourcing approach is the gain in alignment and segmentation accuracy over similar products offered by the automated systems mentioned above. Given the scalability of crowdsourcing methods, it is certainly a viable framework for bulk alignment and segmentation. Another consideration for the development of distributed parallel corpus systems is their position in the translation workflow. The outputs and exfiltration paths of such a system can be used for purposes as diverse as addition to existing TMXs, refinement of existing MT applications (through either improvement of their learning processes or inclusion of parallel-corpora-generated domain-specific lexicons), creation of sentence pairs and other products for language learning systems (LLS), and support for exemplar language clips such as those developed by the State Department.
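
Since TMX is the natural interchange point for such a system, the following is a minimal sketch of exporting vetted, crowdsourced segment pairs to TMX 1.4 with their provenance recorded for downstream filtering; the tool name and the "x-provenance" property are assumptions, not a specific vendor's schema:

# Illustrative sketch: write vetted segment pairs to a TMX 1.4 file, keeping
# provenance as a user-defined property (the TMX spec reserves the "x-"
# prefix for such properties; the exact name here is an assumption).
import xml.etree.ElementTree as ET

def to_tmx(pairs, srclang="en", tgtlang="fr"):
    """pairs: iterable of (source_text, target_text, provenance_label)."""
    tmx = ET.Element("tmx", version="1.4")
    ET.SubElement(tmx, "header", {"o-tmf": "none"}, creationtool="corpus-merge",
                  creationtoolversion="0.1", segtype="sentence",
                  adminlang="en", srclang=srclang, datatype="plaintext")
    body = ET.SubElement(tmx, "body")
    for src, tgt, provenance in pairs:
        tu = ET.SubElement(body, "tu")
        ET.SubElement(tu, "prop", type="x-provenance").text = provenance
        for lang, text in ((srclang, src), (tgtlang, tgt)):
            tuv = ET.SubElement(tu, "tuv", {"xml:lang": lang})
            ET.SubElement(tuv, "seg").text = text
    return ET.ElementTree(tmx)

to_tmx([("Hello", "Bonjour", "crowd:vetted")]).write(
    "merged.tmx", encoding="utf-8", xml_declaration=True)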


Translation of Chinese Entities in Russian Text
William McIntyre

This briefing addresses the development of a conversion table that enables a translator to render Chinese names, locations, and nomenclature appearing in Russian text into proper Pinyin. As a rule, Russian machine translation is robust and provides good results; it is mature technology with extensive glossaries and can be useful for translating documents across many disciplines. However, as a result of the transliteration process, Russian MT will not convert Chinese terms from Russian into the Pinyin standard, which is the standard used by most databases and the internet. Currently the MT software is performing as it was designed, but this problem impacts the accuracy of the MT output, making it almost useless for many purposes, including data retrieval.
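
The core of such a conversion table can be illustrated with a small sketch; the Palladius Cyrillic-to-Pinyin syllable correspondences themselves are standard, but the tiny table and greedy matcher below are toy assumptions standing in for the briefing's actual procedure:

# Map Palladius-style Cyrillic renderings of Chinese syllables back to Pinyin.
# Only a few syllables are shown; a real table covers the full syllabary.
PALLADIUS_TO_PINYIN = {
    "бэй": "bei", "цзин": "jing",    # Бэйцзин  -> Beijing
    "шан": "shang", "хай": "hai",    # Шанхай   -> Shanghai
    "гуан": "guang", "чжоу": "zhou", # Гуанчжоу -> Guangzhou
}

def to_pinyin(cyrillic_name):
    name, out = cyrillic_name.lower(), []
    while name:
        for length in range(4, 0, -1):  # greedy longest-match
            if name[:length] in PALLADIUS_TO_PINYIN:
                out.append(PALLADIUS_TO_PINYIN[name[:length]])
                name = name[length:]
                break
        else:
            return cyrillic_name  # unknown syllable: leave for a linguist
    return "".join(out).capitalize()

print(to_pinyin("Гуанчжоу"))  # -> Guangzhou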

pdf
Parallel Corpus Development at NVTC
Carol Van Ess-Dykema | Laurie Gerber

In this paper, we describe the methods used to develop an exchangeable translation memory bank of sentence-aligned Mandarin Chinese-English sentence pairs. This effort is part of a larger effort, initiated by the National Virtual Translation Center (NVTC), to foster collaboration and sharing of translation memory banks across the Intelligence Community and the Department of Defense. We describe our corpus creation process, a largely automated one, highlighting the human interventions that are still deemed necessary. We conclude with a brief discussion of how this work will affect plans for NVTC's new translation management workflow and future research to increase the performance of the automated components of the corpus creation process.



up

bib (full) Proceedings of the Workshop on Collaborative Translation: technology, crowdsourcing, and the translator perspective

pdf bib
Proceedings of the Workshop on Collaborative Translation: technology, crowdsourcing, and the translator perspective

pdf bib
Crowdsourced translation for emergency response in Haiti: the global collaboration of local knowledge
Robert Munro

In the wake of the January 12 earthquake in Haiti, it quickly became clear that the existing emergency response services had failed, but text messages were still getting through. A number of people quickly came together to establish a text-message-based emergency reporting system. There was one hurdle: the majority of the messages were in Haitian Kreyol, which for the most part was not understood by the primary emergency responders, the US military. We therefore crowdsourced the translation of messages, allowing volunteers from within the Haitian Kreyol- and French-speaking communities to translate, categorize, and geolocate the messages in real time. Collaborating online, they employed their local knowledge of locations, regional slang, abbreviations, and spelling variants to process more than 40,000 messages in the first six weeks alone. According to the responders, this saved hundreds of lives and helped direct the first food and aid to tens of thousands. The average turn-around from a message arriving in Kreyol to its being translated, categorized, geolocated, and streamed back to the responders was 10 minutes. Collaboration among translators was crucial for data quality, motivation, and community contacts, enabling richer value to be added in translation than would have been possible from any one person.

pdf bib
Crowdsourcing and the Professional Translator
Jost Zetzsche

The recent emergence of crowdsourced translation à la Facebook or Twitter has exposed a raw nerve in the translation industry. Perceptions of ill-placed entitlement (we are the professionals who have the "right" to translate these products) abound. And many have felt threatened by something that not only carries a relatively newly coined term, crowdsourcing, but seems in and of itself completely new. Or is it?

pdf
Position Paper: Improving Translation via Targeted Paraphrasing
Yakov Kronrod | Philip Resnik | Olivia Buzek | Chang Hu | Alex Quinn | Ben Bederson

Targeted paraphrasing is a new approach to the problem of obtaining cost-effective, reasonable-quality translation that makes use of simple and inexpensive human computations by monolingual speakers in combination with machine translation. The key insight behind the process is that it is possible to spot likely translation errors with only monolingual knowledge of the target language, and to generate alternative ways of saying the same thing (i.e., paraphrases) with only monolingual knowledge of the source language. Evaluations demonstrate that this approach can yield substantial improvements in translation quality.
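
A minimal sketch of the loop as described follows; all helpers are trivial stand-ins for the human-computation and MT components, and their names and the span-projection step are assumptions, not the authors' implementation:

def mt_translate(source):
    return source  # stand-in for the MT system

def flag_suspect_source_spans(source, translation):
    # Monolingual target speakers mark bad target spans, which word
    # alignments project back to source spans; stubbed as empty here.
    return []

def source_paraphrases(source, span):
    # Monolingual source speakers reword the flagged span; stubbed.
    return []

def targeted_paraphrasing(source):
    best = mt_translate(source)
    for span in flag_suspect_source_spans(source, best):
        for alt in source_paraphrases(source, span):
            candidate = mt_translate(source.replace(span, alt))
            # In the real pipeline, target-side judges pick the best
            # candidate; here we simply keep the latest re-translation.
            best = candidate
    return best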

pdf
WikiBABEL: A System for Multilingual Wikipedia Content
A. Kumaran | Naren Datha | B. Ashok | K. Saravanan | Anil Ande | Ashwani Sharma | Sridhar Vedantham | Vidya Natampally | Vikram Dendi | Sandor Maurice

This position paper outlines our project, WikiBABEL, which will be released as an open source project for the creation of multilingual Wikipedia content, and which has the potential to produce parallel data as a by-product for Machine Translation systems research. We discuss its architecture, functionality, and user-experience components, and briefly present an analysis that emphasizes the resonance that the WikiBABEL design and the planned involvement with Wikipedia have with open source communities in general and Wikipedians in particular.