2018
Towards a Computational Lexicon for Moroccan Darija: Words, Idioms, and Constructions
Jamal Laoudi | Claire Bonial | Lucia Donatelli | Stephen Tratz | Clare Voss
Proceedings of the Joint Workshop on Linguistic Annotation, Multiword Expressions and Constructions (LAW-MWE-CxG-2018)
In this paper, we explore the challenges of building a computational lexicon for Moroccan Darija (MD), an Arabic dialect spoken by over 32 million people worldwide that has only recently begun to appear frequently in written form in social media. We raise the question of what belongs in such a lexicon and start by describing our work building traditional word-level lexicon entries with their English translations. We then discuss challenges in translating idiomatic MD text that led to creating multi-word expression lexicon entries whose meanings could not be fully derived from the individual words. Finally, we provide a preliminary exploration of constructions to be considered for inclusion in an MD constructicon by translating examples of English constructions and examining their MD counterparts.
2016
Toward Temporally-aware MT: Can Information Extraction Help Preserve Temporal Interpretation?
Taylor Cassidy | Jamal Laoudi | Clare Voss
Conferences of the Association for Machine Translation in the Americas: MT Users' Track
2014
Finding Romanized Arabic Dialect in Code-Mixed Tweets
Clare Voss | Stephen Tratz | Jamal Laoudi | Douglas Briesch
Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC'14)
Recent computational work on Arabic dialect identification has focused primarily on building and annotating corpora written in Arabic script. Arabic dialects, however, also appear written in Roman script, especially in social media. This paper describes our recent work developing tweet corpora and a token-level classifier that identifies a Romanized Arabic dialect and distinguishes it from French and English in tweets. We focus on Moroccan Darija, one of several spoken vernaculars in the family of Maghrebi Arabic dialects. Even given noisy, code-mixed tweets, the classifier achieved token-level recall of 93.2% on Romanized Arabic dialect, 83.2% on English, and 90.1% on French. The classifier, now integrated into our tweet conversation annotation tool (Tratz et al. 2013), has semi-automated the construction of a Romanized Arabic-dialect lexicon. Two datasets, a full list of Moroccan Darija surface token forms and a table of lexical entries derived from this list with spelling variants, as extracted from our tweet corpus collection, will be made available in the LRE MAP.
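The recall figures reported above can be read as a per-language measure over gold-labeled tokens. The following is a minimal sketch of how such token-level recall might be computed, not code from the paper; the label names and the toy example are illustrative assumptions.

```python
from collections import Counter

def per_language_recall(gold_labels, pred_labels):
    """Token-level recall per language: correctly labeled tokens / gold tokens of that language.

    `gold_labels` and `pred_labels` are parallel lists with one label per token
    (e.g. 'arabizi', 'english', 'french'); the label set here is an assumption.
    """
    assert len(gold_labels) == len(pred_labels)
    gold_counts = Counter(gold_labels)
    hit_counts = Counter(g for g, p in zip(gold_labels, pred_labels) if g == p)
    return {lang: hit_counts[lang] / gold_counts[lang] for lang in gold_counts}

# Toy code-mixed tweet: 3 Arabizi tokens, 1 French, 1 English
gold = ["arabizi", "arabizi", "french", "english", "arabizi"]
pred = ["arabizi", "french", "french", "english", "arabizi"]
print(per_language_recall(gold, pred))  # arabizi ~0.67, french 1.0, english 1.0
```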
Resumptive Pronoun Detection for Modern Standard Arabic to English MT
Stephen Tratz | Clare Voss | Jamal Laoudi
Proceedings of the 3rd Workshop on Hybrid Approaches to Machine Translation (HyTra)
2013
Tweet Conversation Annotation Tool with a Focus on an Arabic Dialect, Moroccan Darija
Stephen Tratz | Douglas Briesch | Jamal Laoudi | Clare Voss
Proceedings of the 7th Linguistic Annotation Workshop and Interoperability with Discourse
2012
Assessing Divergence Measures for Automated Document Routing in an Adaptive MT System
Claire Jaja | Douglas Briesch | Jamal Laoudi | Clare Voss
Proceedings of the Eighth International Conference on Language Resources and Evaluation (LREC'12)
Custom machine translation (MT) engines systematically outperform general-domain MT engines when translating within the relevant custom domain. This paper investigates the use of the Jensen-Shannon divergence measure for automatically routing new documents within a translation system with multiple MT engines to the appropriate custom MT engine in order to obtain the best translation. Three distinct domains are compared, and the impact of the language, size, and preprocessing of the documents on the Jensen-Shannon score is addressed. Six test datasets are then compared to the three known-domain corpora to predict which of the three custom MT engines they would be routed to at runtime given their Jensen-Shannon scores. The results are promising for incorporating this divergence measure into a translation workflow.
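For concreteness, the routing criterion can be sketched as follows: build a smoothed unigram distribution for the incoming document and for each known-domain corpus, then route the document to the engine whose domain minimizes the Jensen-Shannon divergence. This is a minimal sketch under simple assumptions, not the paper's implementation; the smoothing, tokenization, and function names are illustrative.

```python
import math
from collections import Counter

def unigram_dist(tokens, vocab, eps=1e-6):
    """Smoothed unigram distribution over a shared vocabulary."""
    counts = Counter(tokens)
    total = len(tokens) + eps * len(vocab)
    return {w: (counts[w] + eps) / total for w in vocab}

def kl(p, q):
    """Kullback-Leibler divergence in bits; p and q share the same keys."""
    return sum(p[w] * math.log2(p[w] / q[w]) for w in p if p[w] > 0)

def js_divergence(p, q):
    """JS(P, Q) = 0.5 * KL(P || M) + 0.5 * KL(Q || M), where M = (P + Q) / 2."""
    m = {w: 0.5 * (p[w] + q[w]) for w in p}
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

def route_document(doc_tokens, domain_corpora):
    """Return the domain whose corpus distribution is closest to the document's."""
    vocab = set(doc_tokens)
    for corpus_tokens in domain_corpora.values():
        vocab.update(corpus_tokens)
    p_doc = unigram_dist(doc_tokens, vocab)
    scores = {name: js_divergence(p_doc, unigram_dist(corpus_tokens, vocab))
              for name, corpus_tokens in domain_corpora.items()}
    return min(scores, key=scores.get)
```

In a full workflow, `domain_corpora` would hold the tokenized known-domain corpora and `doc_tokens` the preprocessed incoming document; the paper further examines how language, corpus size, and preprocessing affect the resulting scores.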
2009
On beyond TM: When the Translator Leads the Design of a Translation Support Framework
Reginald Hobbs | Clare Voss | Jamal Laoudi
Proceedings of Machine Translation Summit XII: Government MT User Program
2008
Boosting performance of weak MT engines automatically: using MT output to align segments & build statistical post-editors
Clare R. Voss | Matthew Aguirre | Jeffrey Micher | Richard Chang | Jamal Laoudi | Reginald Hobbs
Proceedings of the 12th Annual Conference of the European Association for Machine Translation
MTriage: Web-enabled Software for the Creation, Machine Translation, and Annotation of Smart Documents
Reginald Hobbs | Jamal Laoudi | Clare Voss
Proceedings of the Sixth International Conference on Language Resources and Evaluation (LREC'08)
Progress in the Machine Translation (MT) research community, particularly for statistical approaches, is intensely data-driven. Acquiring source language documents for testing, creating training datasets for customized MT lexicons, and building parallel corpora for MT evaluation require translators and non-native speaking analysts to handle large document collections. These collections are further complicated by differences in format, encoding, source media, and access to metadata describing the documents. Automated tools that allow language professionals to quickly annotate, translate, and evaluate foreign language documents are essential to improving MT quality and efficacy. The purpose of this paper is to present our research approach to improving MT through pre-processing source language documents. In particular, we will discuss the development and use of MTriage, an application environment that enables the translator to mark up documents with metadata for MT parameterization and routing. The use of MTriage as a web-enabled front end to multiple MT engines has leveraged the capabilities of our human translators for creating lexicons from NFW (Not-Found-Word) lists, writing reference translations, and creating parallel corpora for MT development and evaluation.
Exploitation of an Arabic Language Resource for Machine Translation Evaluation: using Buckwalter-based Lookup Tool to Augment CMU Alignment Algorithm
Clare Voss | Jamal Laoudi | Jeffrey Micher
Proceedings of the Sixth International Conference on Language Resources and Evaluation (LREC'08)
Voss et al. (2006) analyzed newswire translations of three DARPA GALE Arabic-English MT systems at the segment level in terms of subjective judgment scores, automated metric scores, and correlations among these different score types. At this level of granularity, the correlations are weak. In this paper, we begin to reconcile the subjective and automated scores that underlie these correlations by explicitly grounding MT output with its Reference Translation (RT) prior to subjective or automated evaluation. The first two phases of our approach annotate {MT, RT} pairs with the same types of textual comparisons that subjects intuitively apply, while the third phase (not presented here) entails scoring the pairs: (i) automated calculation of MT-RT hits using the CMU aligner from METEOR, (ii) an extension phase where our Buckwalter-based Lookup Tool serves to generate six other textual comparison categories on items in the MT output that the CMU aligner does not identify, and (iii) given the fully categorized RT & MT pair, a final adequacy score is assigned to the MT output, either by an automated metric based on weighted category counts and segment length, or by a trained human judge.
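The categorize-then-score pipeline described above can be sketched abstractly: mark exact MT-RT matches as hits, pass the remaining MT tokens to a lexicon-based categorizer, and compute a length-normalized weighted score. This is an illustrative sketch only; it is not the CMU aligner or the Buckwalter-based Lookup Tool, and `lookup_category`, the category labels, and the weights are hypothetical placeholders.

```python
def categorize_mt_tokens(mt_tokens, rt_tokens, lookup_category):
    """Categorize each MT token against the reference translation (RT).

    Stage 1: exact-match 'hits' (a crude stand-in for an aligner step).
    Stage 2: unmatched tokens go to `lookup_category`, a user-supplied,
    hypothetical lexicon-based comparison that returns a category label.
    """
    remaining = list(rt_tokens)
    categories = []
    for tok in mt_tokens:
        if tok in remaining:              # exact surface match only
            remaining.remove(tok)
            categories.append("hit")
        else:
            categories.append(lookup_category(tok, rt_tokens))
    return categories

def adequacy_score(categories, weights, segment_length):
    """One plausible scheme: weighted category counts normalized by segment length."""
    return sum(weights.get(c, 0.0) for c in categories) / max(segment_length, 1)

# Toy usage with hypothetical categories and weights:
cats = categorize_mt_tokens(
    ["the", "soldiers", "left", "city"],
    ["the", "troops", "left", "the", "city"],
    lambda tok, rt: "near-match" if tok == "soldiers" else "no-match",
)
score = adequacy_score(cats, {"hit": 1.0, "near-match": 0.5}, segment_length=5)
```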
2006
Task-based MT Evaluation: From Who/When/Where Extraction to Event Understanding
Jamal Laoudi | Calandra R. Tate | Clare R. Voss
Proceedings of the Fifth International Conference on Language Resources and Evaluation (LREC’06)
Task-based machine translation (MT) evaluation asks: how well do people perform text-handling tasks given MT output? This method of evaluation yields an extrinsic assessment of an MT engine, in terms of users' task performance on MT output. While this method is time-consuming, its key advantage is that MT users and stakeholders understand how to interpret the assessment results. Prior experiments showed that subjects can extract individual who-, when-, and where-type elements of information from MT output passages that were not especially fluent. This paper presents the results of a pilot study to assess a slightly more complex task: when given such wh-items already identified in an MT output passage, how well can subjects properly select from and place these items into wh-typed slots to complete a sentence-template about the passage's event? The results of the pilot with nearly sixty subjects, while only preliminary, indicate that this task was extremely challenging: given six test templates to complete, half of the subjects had no completely correct templates and 42% had exactly one completely correct template. The provisional interpretation of this pilot study is that event-based template completion defines a task ceiling against which to evaluate future improvements on MT engines.