2020
Inference-only sub-character decomposition improves translation of unseen logographic characters
Danielle Saunders | Weston Feely | Bill Byrne
Proceedings of the 7th Workshop on Asian Translation
Neural Machine Translation (NMT) on logographic source languages struggles when translating ‘unseen’ characters, which never appear in the training data. One possible approach to this problem uses sub-character decomposition for training and test sentences. However, this approach involves complete retraining, and its effectiveness for unseen character translation to non-logographic languages has not been fully explored. We investigate existing ideograph-based sub-character decomposition approaches for Chinese-to-English and Japanese-to-English NMT, for both high-resource and low-resource domains. For each language pair and domain we construct a test set where all source sentences contain at least one unseen logographic character. We find that complete sub-character decomposition often harms unseen character translation, and gives inconsistent results generally. We offer a simple alternative based on decomposition before inference for unseen characters only. Our approach allows flexible application, achieving translation adequacy improvements and requiring no additional models or training.
2019
Controlling Japanese Honorifics in English-to-Japanese Neural Machine Translation
Weston Feely | Eva Hasler | Adrià de Gispert
Proceedings of the 6th Workshop on Asian Translation
In the Japanese language, different levels of honorific speech are used to convey respect, deference, humility, formality, and social distance. In this paper, we present a method for controlling the level of formality of Japanese output in English-to-Japanese neural machine translation (NMT). Using heuristics to identify honorific verb forms, we classify the Japanese sentences in a parallel text into one of three levels: informal, polite, or formal speech. The English source side is marked with a feature that identifies the level of honorific speech present in the Japanese target side. We use this parallel text to train an English-Japanese NMT model capable of producing Japanese translations in different honorific speech styles for the same English input sentence.
2014
The CMU Machine Translation Systems at WMT 2014
Austin Matthews | Waleed Ammar | Archna Bhatia | Weston Feely | Greg Hanneman | Eva Schlinger | Swabha Swayamdipta | Yulia Tsvetkov | Alon Lavie | Chris Dyer
Proceedings of the Ninth Workshop on Statistical Machine Translation
Domain and Dialect Adaptation for Machine Translation into Egyptian Arabic
Serena Jeblee | Weston Feely | Houda Bouamor | Alon Lavie | Nizar Habash | Kemal Oflazer
Proceedings of the EMNLP 2014 Workshop on Arabic Natural Language Processing (ANLP)
Resources for the Detection of Conventionalized Metaphors in Four Languages
Lori Levin | Teruko Mitamura | Brian MacWhinney | Davida Fromm | Jaime Carbonell | Weston Feely | Robert Frederking | Anatole Gershman | Carlos Ramirez
Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC'14)
This paper describes a suite of tools for extracting conventionalized metaphors in English, Spanish, Farsi, and Russian. The method depends on three significant resources for each language: a corpus of conventionalized metaphors, a table of conventionalized conceptual metaphors (CCM table), and a set of extraction rules. Conventionalized metaphors are expressions like “escape from poverty” and “burden of taxation”. For each metaphor, the CCM table contains the metaphorical source domain word (such as “escape”), the target domain word (such as “poverty”), and the grammatical construction in which they can be found. The extraction rules operate on the output of a dependency parser and identify the grammatical configurations (such as a verb with a prepositional phrase complement) that are likely to contain conventional metaphors. We present results on detection rates for conventional metaphors and an analysis of the similarities and differences of source domains for conventional metaphors in the four languages.
The CMU METAL Farsi NLP Approach
Weston Feely | Mehdi Manshadi | Robert Frederking | Lori Levin
Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC'14)
While many high-quality tools are available for analyzing major languages such as English, equivalent freely-available tools for important but lower-resourced languages such as Farsi are more difficult to acquire and integrate into a useful NLP front end. We report here on an accurate and efficient Farsi analysis front end that we have assembled, which may be useful to others who wish to work with written Farsi. The pre-existing components and resources that we incorporated include the Carnegie Mellon TurboParser and TurboTagger (Martins et al., 2010) trained on the Dadegan Treebank (Rasooli et al., 2013), the Uppsala Farsi text normalizer PrePer (Seraji, 2013), the Uppsala Farsi tokenizer (Seraji et al., 2012a), and Jon Dehdari's PerStem (Jadidinejad et al., 2010). This set of tools (combined with additional normalization and tokenization modules that we have developed and made available) achieves a dependency parsing labeled attachment score of 89.49%, an unlabeled attachment score of 92.19%, and a label accuracy score of 91.38% on a held-out parsing test data set. All of the components and resources used are freely available. In addition to describing the components and resources, we also explain the rationale for our choices.