Arianna Bisazza


2022

pdf
Evaluating Pre-training Objectives for Low-Resource Translation into Morphologically Rich Languages
Prajit Dhar | Arianna Bisazza | Gertjan van Noord
Proceedings of the Thirteenth Language Resources and Evaluation Conference

The scarcity of parallel data is a major limitation for Neural Machine Translation (NMT) systems, in particular for translation into morphologically rich languages (MRLs). An important way to overcome the lack of parallel data is to leverage target monolingual data, which is typically more abundant and easier to collect. We evaluate a number of techniques to achieve this, ranging from back-translation to random token masking, on the challenging task of translating English into four typologically diverse MRLs under low-resource settings. Additionally, we introduce Inflection Pre-Training (or PT-Inflect), a novel pre-training objective whereby the NMT system is pre-trained on the task of re-inflecting lemmatized target sentences before being trained on standard source-to-target language translation. We find that PT-Inflect surpasses NMT systems trained only on parallel data. While PT-Inflect is outperformed by back-translation overall, combining the two techniques leads to gains in some of the evaluated language pairs.
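
As a rough illustration of the PT-Inflect objective, the sketch below builds (lemmatized, inflected) sentence pairs from monolingual target data for pre-training a seq2seq model. The toy lemma table and example sentence are placeholders; the abstract does not specify the lemmatization tooling.

```python
# Illustrative sketch of the PT-Inflect objective: pre-train a seq2seq
# model to map a lemmatized target sentence back to its inflected form,
# using only monolingual target data.

TOY_LEMMAS = {"cats": "cat", "ran": "run"}  # hypothetical lemma table

def lemmatize(sentence: str) -> str:
    """Replace each token by its lemma, falling back to the token itself."""
    return " ".join(TOY_LEMMAS.get(tok, tok) for tok in sentence.split())

def make_pt_inflect_pairs(mono_sentences):
    """Yield (lemmatized, inflected) pairs for inflection pre-training."""
    for sent in mono_sentences:
        yield lemmatize(sent), sent

for src, tgt in make_pt_inflect_pairs(["the cats ran quickly"]):
    print(f"SRC: {src}  ->  TGT: {tgt}")
```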

pdf
DivEMT: Neural Machine Translation Post-Editing Effort Across Typologically Diverse Languages
Gabriele Sarti | Arianna Bisazza | Ana Guerberof-Arenas | Antonio Toral
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing

We introduce DivEMT, the first publicly available post-editing study of Neural Machine Translation (NMT) over a typologically diverse set of target languages. Using a strictly controlled setup, 18 professional translators were instructed to translate or post-edit the same set of English documents into Arabic, Dutch, Italian, Turkish, Ukrainian, and Vietnamese. During the process, their edits, keystrokes, editing times and pauses were recorded, enabling an in-depth, cross-lingual evaluation of NMT quality and post-editing effectiveness. Using this new dataset, we assess the impact of two state-of-the-art NMT systems, Google Translate and the multilingual mBART-50 model, on translation productivity. We find that post-editing is consistently faster than translation from scratch. However, the magnitude of productivity gains varies widely across systems and languages, highlighting major disparities in post-editing effectiveness for languages at different degrees of typological relatedness to English, even when controlling for system architecture and training data size. We publicly release the complete dataset including all collected behavioral data, to foster new research on the translation capabilities of NMT systems for typologically diverse languages.

pdf
Hyper-X: A Unified Hypernetwork for Multi-Task Multilingual Transfer
Ahmet Üstün | Arianna Bisazza | Gosse Bouma | Gertjan van Noord | Sebastian Ruder
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing

Massively multilingual models are promising for transfer learning across tasks and languages. However, existing methods are unable to fully leverage training data when it is available in different task-language combinations. To exploit such heterogeneous supervision, we propose Hyper-X, a single hypernetwork that unifies multi-task and multilingual learning with efficient adaptation. It generates weights for adapter modules conditioned on both task and language embeddings. By learning to combine task- and language-specific knowledge, our model enables zero-shot transfer for unseen languages and task-language combinations. Our experiments on a diverse set of languages demonstrate that Hyper-X achieves the best or competitive gains when a mixture of multiple resources is available, while remaining on par with strong baselines in the standard scenario. Hyper-X is also considerably more efficient in terms of parameters and resources compared to methods that train separate adapters. Finally, Hyper-X consistently produces strong results in few-shot scenarios for new languages, showing the versatility of our approach beyond zero-shot transfer.
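
A minimal PyTorch sketch of the central mechanism as described in the abstract: a hypernetwork that generates bottleneck-adapter weights from concatenated task and language embeddings. All dimensions and module names are illustrative assumptions, not the paper's configuration.

```python
# Sketch (not the paper's exact setup): a hypernetwork produces the
# down- and up-projection weights of an adapter from task + language
# embeddings, so one network serves all task-language combinations.
import torch
import torch.nn as nn

class AdapterHypernet(nn.Module):
    def __init__(self, n_tasks, n_langs, emb_dim=64,
                 hidden=512, adapter_dim=96, model_dim=768):
        super().__init__()
        self.task_emb = nn.Embedding(n_tasks, emb_dim)
        self.lang_emb = nn.Embedding(n_langs, emb_dim)
        n_params = 2 * adapter_dim * model_dim  # down + up projections
        self.generator = nn.Sequential(
            nn.Linear(2 * emb_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, n_params),
        )
        self.adapter_dim, self.model_dim = adapter_dim, model_dim

    def forward(self, task_id, lang_id):
        source = torch.cat([self.task_emb(task_id),
                            self.lang_emb(lang_id)], dim=-1)
        flat = self.generator(source)
        down, up = flat.split(self.adapter_dim * self.model_dim)
        return (down.view(self.model_dim, self.adapter_dim),
                up.view(self.adapter_dim, self.model_dim))

hyper = AdapterHypernet(n_tasks=2, n_langs=100)
w_down, w_up = hyper(torch.tensor(0), torch.tensor(42))
h = torch.randn(1, 768)                      # a transformer hidden state
adapted = h + torch.relu(h @ w_down) @ w_up  # bottleneck adapter with residual
```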

pdf
InDeep × NMT: Empowering Human Translators via Interpretable Neural Machine Translation
Gabriele Sarti | Arianna Bisazza
Proceedings of the 23rd Annual Conference of the European Association for Machine Translation

Neural machine translation (NMT) systems are nowadays essential components of professional translation workflows. Consequently, human translators are increasingly working as post-editors for machine-translated content. The NWO-funded InDeep project aims to empower users of Deep Learning models of text, speech, and music by improving their ability to interact with such models and interpret their behaviors. In the specific context of translation, we aim at developing new tools and methodologies to improve prediction attribution, error analysis, and controllable generation for NMT systems. These advances will be evaluated through field studies involving professional translators to assess gains in efficiency and overall enjoyability of the post-editing process.

pdf bib
UDapter: Typology-based Language Adapters for Multilingual Dependency Parsing and Sequence Labeling
Ahmet Üstün | Arianna Bisazza | Gosse Bouma | Gertjan van Noord
Computational Linguistics, Volume 48, Issue 3 - September 2022

Recent advances in multilingual language modeling have brought the idea of a truly universal parser closer to reality. However, such models are still not immune to the “curse of multilinguality”: Cross-language interference and restrained model capacity remain major obstacles. To address this, we propose a novel language adaptation approach by introducing contextual language adapters to a multilingual parser. Contextual language adapters make it possible to learn adapters via language embeddings while sharing model parameters across languages based on contextual parameter generation. Moreover, our method allows for an easy but effective integration of existing linguistic typology features into the parsing model. Because not all typological features are available for every language, we further combine typological feature prediction with parsing in a multi-task model that achieves very competitive parsing performance without the need for an external prediction system for missing features. The resulting parser, UDapter, can be used for dependency parsing as well as sequence labeling tasks such as POS tagging, morphological tagging, and NER. In dependency parsing, it outperforms strong monolingual and multilingual baselines on the majority of both high-resource and low-resource (zero-shot) languages, showing the success of the proposed adaptation approach. In sequence labeling tasks, our parser surpasses the baseline on high-resource languages, and performs very competitively in a zero-shot setting. Our in-depth analyses show that adapter generation via typological features of languages is key to this success.
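
The following sketch illustrates one ingredient of this approach: projecting a typological feature vector into a language embedding that can condition adapter generation. The feature dimensionality and the masking of unknown values are placeholders; the paper instead predicts missing features in a multi-task setup.

```python
# Sketch of a typology-driven language embedding, under assumed
# dimensions. UDapter uses URIEL-style typological feature vectors;
# the random binary vector below is a stand-in.
import torch
import torch.nn as nn

class TypologyEncoder(nn.Module):
    def __init__(self, n_features=200, emb_dim=32):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(n_features, 100), nn.ReLU(),
            nn.Linear(100, emb_dim),
        )

    def forward(self, features, known_mask):
        # Zero out unknown feature values before encoding; the paper
        # instead predicts them with an auxiliary task.
        return self.mlp(features * known_mask)

enc = TypologyEncoder()
feats = torch.rand(1, 200).round()   # toy binary typology vector
known = torch.ones(1, 200)           # 1 = feature value is known
lang_embedding = enc(feats, known)   # conditions the adapter generator
```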

2021

pdf bib
Proceedings of the 25th Conference on Computational Natural Language Learning
Arianna Bisazza | Omri Abend
Proceedings of the 25th Conference on Computational Natural Language Learning

pdf
The Effect of Efficient Messaging and Input Variability on Neural-Agent Iterated Language Learning
Yuchen Lian | Arianna Bisazza | Tessa Verhoef
Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing

Natural languages display a trade-off among different strategies to convey syntactic structure, such as word order or inflection. This trade-off, however, has not appeared in recent simulations of iterated language learning with neural network agents (Chaabouni et al., 2019b). We re-evaluate this result in light of three factors that play an important role in comparable experiments from the Language Evolution field: (i) speaker bias towards efficient messaging, (ii) non-systematic input languages, and (iii) a learning bottleneck. Our simulations show that neural agents mainly strive to maintain the utterance type distribution observed during learning, instead of developing a more efficient or systematic language.

pdf
On the Difficulty of Translating Free-Order Case-Marking Languages
Arianna Bisazza | Ahmet Üstün | Stephan Sportel
Transactions of the Association for Computational Linguistics, Volume 9

Identifying factors that make certain languages harder to model than others is essential to reach language equality in future Natural Language Processing technologies. Free-order case-marking languages, such as Russian, Latin, or Tamil, have proved more challenging than fixed-order languages for the tasks of syntactic parsing and subject-verb agreement prediction. In this work, we investigate whether this class of languages is also more difficult to translate by state-of-the-art Neural Machine Translation (NMT) models. Using a variety of synthetic languages and a newly introduced translation challenge set, we find that word order flexibility in the source language only leads to a very small loss of NMT quality, even though the core verb arguments become impossible to disambiguate in sentences without semantic cues. The latter issue is indeed solved by the addition of case marking. However, in medium- and low-resource settings, the overall NMT quality of fixed-order languages remains unmatched.

pdf
Evaluating Text Generation from Discourse Representation Structures
Chunliu Wang | Rik van Noord | Arianna Bisazza | Johan Bos
Proceedings of the 1st Workshop on Natural Language Generation, Evaluation, and Metrics (GEM 2021)

We present an end-to-end neural approach to generate English sentences from formal meaning representations, Discourse Representation Structures (DRSs). We use a rather standard bi-LSTM sequence-to-sequence model, work with a linearized DRS input representation, and evaluate character-level and word-level decoders. We obtain very encouraging results in terms of reference-based automatic metrics such as BLEU. But because such metrics only evaluate the surface level of generated output, we develop a new metric, ROSE, that targets specific semantic phenomena. We do this with five DRS generation challenge sets focusing on tense, grammatical number, polarity, named entities and quantities. The aim of these challenge sets is to assess the neural generator’s systematicity and generalization to unseen inputs.

pdf
Input Representations for Parsing Discourse Representation Structures: Comparing English with Chinese
Chunliu Wang | Rik van Noord | Arianna Bisazza | Johan Bos
Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 2: Short Papers)

Neural semantic parsers have obtained acceptable results in the context of parsing DRSs (Discourse Representation Structures). In particular, models with character sequences as input have shown remarkable performance for English. But how does this approach perform on languages with a different writing system, like Chinese, a language with a large vocabulary of characters? Does rule-based tokenisation of the input help, and which granularity is preferred: characters or words? The results are promising. Even with DRSs based on English, good results for Chinese are obtained. Tokenisation offers a small advantage for English, but not for Chinese. Overall, characters are preferred as input, both for English and Chinese.

pdf
Optimal Word Segmentation for Neural Machine Translation into Dravidian Languages
Prajit Dhar | Arianna Bisazza | Gertjan van Noord
Proceedings of the 8th Workshop on Asian Translation (WAT2021)

Dravidian languages, such as Kannada and Tamil, are notoriously difficult to translate by state-of-the-art neural models. This stems from the fact that these languages are morphologically very rich as well as low-resourced. In this paper, we focus on subword segmentation and evaluate Linguistically Motivated Vocabulary Reduction (LMVR) against the more commonly used SentencePiece (SP) for the task of translating from English into four different Dravidian languages. Additionally, we investigate the optimal subword vocabulary size for each language. We find that SP is the overall best choice for segmentation, and that larger dictionary sizes lead to higher translation quality.
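
For concreteness, here is a sketch of the kind of SentencePiece sweep such an evaluation implies, using the sentencepiece Python API. File paths, vocabulary sizes, and the sample sentence are placeholders.

```python
# Sketch: train SentencePiece models at several vocabulary sizes and
# segment text with each, to compare downstream translation quality.
import sentencepiece as spm

for vocab_size in (4000, 8000, 16000, 32000):    # sizes are illustrative
    spm.SentencePieceTrainer.train(
        input="train.ta",                 # target-side text (placeholder path)
        model_prefix=f"sp_ta_{vocab_size}",
        vocab_size=vocab_size,
        model_type="unigram",             # SentencePiece's default algorithm
    )
    sp = spm.SentencePieceProcessor(model_file=f"sp_ta_{vocab_size}.model")
    print(vocab_size, sp.encode("a placeholder sentence", out_type=str))
```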

pdf
Understanding Cross-Lingual Syntactic Transfer in Multilingual Recurrent Neural Networks
Prajit Dhar | Arianna Bisazza
Proceedings of the 23rd Nordic Conference on Computational Linguistics (NoDaLiDa)

It is now established that modern neural language models can be successfully trained on multiple languages simultaneously without changes to the underlying architecture, providing an easy way to adapt a variety of NLP models to low-resource languages. But what kind of knowledge is really shared among languages within these models? Does multilingual training mostly lead to an alignment of the lexical representation spaces or does it also enable the sharing of purely grammatical knowledge? In this paper we dissect different forms of cross-lingual transfer and look for its most determining factors, using a variety of models and probing tasks. We find that exposing our LMs to a related language does not always increase grammatical knowledge in the target language, and that optimal conditions for lexical-semantic transfer may not be optimal for syntactic transfer.

pdf
Using Confidential Data for Domain Adaptation of Neural Machine Translation
Sohyung Kim | Arianna Bisazza | Fatih Turkmen
Proceedings of the Third Workshop on Privacy in Natural Language Processing

We study the problem of domain adaptation in Neural Machine Translation (NMT) when domain-specific data cannot be shared due to confidentiality or copyright issues. As a first step, we propose to fragment data into phrase pairs and use a random sample to fine-tune a generic NMT model instead of the full sentences. Despite the loss of long segments for the sake of confidentiality protection, we find that NMT quality can considerably benefit from this adaptation, and that further gains can be obtained with a simple tagging technique.
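
A toy sketch of the fragmentation step under strong simplifying assumptions: real phrase-pair extraction relies on word alignments, whereas the naive monotone split below merely illustrates that only short fragments, not full sentences, would be shared.

```python
# Sketch: break confidential parallel sentences into short phrase pairs
# and fine-tune on a random sample of those, never exposing full sentences.
import random

def fragment(src: str, tgt: str, max_len: int = 3):
    """Yield short, roughly aligned phrase pairs (toy monotone heuristic)."""
    s, t = src.split(), tgt.split()
    for i in range(0, min(len(s), len(t)), max_len):
        yield " ".join(s[i:i + max_len]), " ".join(t[i:i + max_len])

corpus = [("the contract is signed today", "le contrat est signé aujourd'hui")]
pairs = [p for src, tgt in corpus for p in fragment(src, tgt)]
sample = random.sample(pairs, k=min(2, len(pairs)))  # share only fragments
print(sample)
```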

2020

pdf
Linguistically Motivated Subwords for English-Tamil Translation: University of Groningen’s Submission to WMT-2020
Prajit Dhar | Arianna Bisazza | Gertjan van Noord
Proceedings of the Fifth Conference on Machine Translation

This paper describes our submission for the English-Tamil news translation task of WMT-2020. The various techniques and Neural Machine Translation (NMT) models used by our team are presented and discussed, including back-translation, fine-tuning and word dropout. Additionally, our experiments show that using a linguistically motivated subword segmentation technique (Ataman et al., 2017) does not consistently outperform the more widely used, non-linguistically motivated SentencePiece algorithm (Kudo and Richardson, 2018), despite the agglutinative nature of Tamil morphology.

pdf bib
Proceedings of the 22nd Annual Conference of the European Association for Machine Translation
André Martins | Helena Moniz | Sara Fumega | Bruno Martins | Fernando Batista | Luisa Coheur | Carla Parra | Isabel Trancoso | Marco Turchi | Arianna Bisazza | Joss Moorkens | Ana Guerberof | Mary Nurminen | Lena Marg | Mikel L. Forcada
Proceedings of the 22nd Annual Conference of the European Association for Machine Translation

pdf
UDapter: Language Adaptation for Truly Universal Dependency Parsing
Ahmet Üstün | Arianna Bisazza | Gosse Bouma | Gertjan van Noord
Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)

Recent advances in multilingual dependency parsing have brought the idea of a truly universal parser closer to reality. However, cross-language interference and restrained model capacity remain major obstacles. To address this, we propose a novel multilingual task adaptation approach based on contextual parameter generation and adapter modules. This approach makes it possible to learn adapters via language embeddings while sharing model parameters across languages. It also allows for an easy but effective integration of existing linguistic typology features into the parsing network. The resulting parser, UDapter, outperforms strong monolingual and multilingual baselines on the majority of both high-resource and low-resource (zero-shot) languages, showing the success of the proposed adaptation approach. Our in-depth analyses show that soft parameter sharing via typological features is key to this success.

2019

pdf
Zero-shot Dependency Parsing with Pre-trained Multilingual Sentence Representations
Ke Tran | Arianna Bisazza
Proceedings of the 2nd Workshop on Deep Learning Approaches for Low-Resource NLP (DeepLo 2019)

We investigate whether off-the-shelf deep bidirectional sentence representations (Devlin et al., 2019) trained on a massively multilingual corpus (multilingual BERT) enable the development of an unsupervised universal dependency parser. This approach only leverages a mix of monolingual corpora in many languages and does not require any translation data, making it applicable to low-resource languages. In our experiments we outperform the best CoNLL 2018 language-specific systems in all of the shared task’s six truly low-resource languages while using a single system. However, we also find that (i) parsing accuracy still varies dramatically when changing the training languages and (ii) in some target languages zero-shot transfer fails under all tested conditions, raising concerns on the ‘universality’ of the whole approach.
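
As a sketch of the starting point of this approach (frozen multilingual BERT features feeding a parser), using the Hugging Face transformers API; the checkpoint name is the standard mBERT release, and the parsing head itself is omitted.

```python
# Sketch: extract contextual representations from multilingual BERT; a
# biaffine parsing head trained on other languages would consume them.
import torch
from transformers import AutoModel, AutoTokenizer

tok = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
model = AutoModel.from_pretrained("bert-base-multilingual-cased")
model.eval()

sentence = "Koira juoksee puistossa"           # Finnish, seen only unlabeled
inputs = tok(sentence, return_tensors="pt")
with torch.no_grad():
    hidden = model(**inputs).last_hidden_state  # (1, seq_len, 768)
print(hidden.shape)
```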

2018

pdf
Examining the Tip of the Iceberg: A Data Set for Idiom Translation
Marzieh Fadaee | Arianna Bisazza | Christof Monz
Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018)

pdf
Evaluation of Machine Translation Performance Across Multiple Genres and Languages
Marlies van der Wees | Arianna Bisazza | Christof Monz
Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018)

pdf
The Lazy Encoder: A Fine-Grained Analysis of the Role of Morphology in Neural Machine Translation
Arianna Bisazza | Clara Tump
Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing

Neural sequence-to-sequence models have proven very effective for machine translation, but at the expense of model interpretability. To shed more light on the role played by linguistic structure in the process of neural machine translation, we perform a fine-grained analysis of how various source-side morphological features are captured at different levels of the NMT encoder while varying the target language. Unlike previous work, we find no correlation between the accuracy of source morphology encoding and translation quality. We do find that morphological features are only captured in context and only to the extent that they are directly transferable to the target words.
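
A minimal sketch of the probing methodology this analysis implies: a light classifier predicting a morphological feature from encoder hidden states. The random data below is a stand-in; in the paper the states come from trained NMT encoders.

```python
# Sketch: probe one encoder layer for a morphological feature
# (e.g. number) with a logistic-regression classifier.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
states = rng.normal(size=(1000, 512))    # encoder states (placeholder)
labels = rng.integers(0, 2, size=1000)   # e.g. singular vs plural

X_tr, X_te, y_tr, y_te = train_test_split(states, labels, random_state=0)
probe = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("probing accuracy:", probe.score(X_te, y_te))  # ~0.5 on random data
```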

pdf
The Importance of Being Recurrent for Modeling Hierarchical Structure
Ke Tran | Arianna Bisazza | Christof Monz
Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing

Recent work has shown that recurrent neural networks (RNNs) can implicitly capture and exploit hierarchical information when trained to solve common natural language processing tasks (Blevins et al., 2018) such as language modeling (Linzen et al., 2016; Gulordava et al., 2018) and neural machine translation (Shi et al., 2016). In contrast, the ability to model structured data with non-recurrent neural networks has received little attention despite their success in many NLP tasks (Gehring et al., 2017; Vaswani et al., 2017). In this work, we compare the two architectures—recurrent versus non-recurrent—with respect to their ability to model hierarchical structure and find that recurrency is indeed important for this purpose. The code and data used in our experiments are available at https://github.com/ketranm/fan_vs_rnn

pdf bib
Keynote: Unveiling the Linguistic Weaknesses of Neural MT
Arianna Bisazza
Proceedings of the 13th Conference of the Association for Machine Translation in the Americas (Volume 1: Research Track)

pdf
Does Syntactic Knowledge in Multilingual Language Models Transfer Across Languages?
Prajit Dhar | Arianna Bisazza
Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP

Recent work has shown that neural models can be successfully trained on multiple languages simultaneously. We investigate whether such models learn to share and exploit common syntactic knowledge among the languages on which they are trained. This extended abstract presents our preliminary results.

2017

pdf
Learning Topic-Sensitive Word Representations
Marzieh Fadaee | Arianna Bisazza | Christof Monz
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)

Distributed word representations are widely used for modeling words in NLP tasks. Most existing models generate one representation per word and do not consider different meanings of a word. We present two approaches to learn multiple topic-sensitive representations per word using the Hierarchical Dirichlet Process. We observe that by modeling topics and integrating topic distributions for each document, we obtain representations that are able to distinguish between different meanings of a given word. Our models yield statistically significant improvements on the lexical substitution task, indicating that commonly used single word representations, even when combined with contextual information, are insufficient for this task.

pdf
Data Augmentation for Low-Resource Neural Machine Translation
Marzieh Fadaee | Arianna Bisazza | Christof Monz
Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)

The quality of a Neural Machine Translation system depends substantially on the availability of sizable parallel corpora. For low-resource language pairs this is not the case, resulting in poor translation quality. Inspired by work in computer vision, we propose a novel data augmentation approach that targets low-frequency words by generating new sentence pairs containing rare words in new, synthetically created contexts. Experimental results on simulated low-resource settings show that our method improves translation quality by up to 2.9 BLEU points over the baseline and up to 3.2 BLEU over back-translation.
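
A toy sketch of the substitution idea, with heavy simplifications: the paper selects plausible positions with language models and translates the substituted word with a lexical model, both of which are reduced to fixed lookups here.

```python
# Sketch: create a new sentence pair by swapping a rare word into an
# existing pair and adjusting the target side accordingly.
RARE_TRANSLATIONS = {"otter": "loutre"}   # hypothetical rare word pair

def augment(src_words, tgt_words, src_pos, tgt_pos, rare_src):
    """Return a new sentence pair with the rare word swapped in."""
    new_src, new_tgt = list(src_words), list(tgt_words)
    new_src[src_pos] = rare_src
    new_tgt[tgt_pos] = RARE_TRANSLATIONS[rare_src]
    return " ".join(new_src), " ".join(new_tgt)

src = "the cat sleeps".split()
tgt = "le chat dort".split()
# Positions are fixed here; the paper picks them by LM plausibility.
print(augment(src, tgt, src_pos=1, tgt_pos=1, rare_src="otter"))
```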

pdf
Dynamic Data Selection for Neural Machine Translation
Marlies van der Wees | Arianna Bisazza | Christof Monz
Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing

Intelligent selection of training data has proven a successful technique to simultaneously increase training efficiency and translation performance for phrase-based machine translation (PBMT). With the recent increase in popularity of neural machine translation (NMT), we explore in this paper to what extent and how NMT can also benefit from data selection. While state-of-the-art data selection (Axelrod et al., 2011) consistently performs well for PBMT, we show that gains are substantially lower for NMT. Next, we introduce ‘dynamic data selection’ for NMT, a method in which we vary the selected subset of training data between different training epochs. Our experiments show that the best results are achieved when applying a technique we call ‘gradual fine-tuning’, with improvements up to +2.6 BLEU over the original data selection approach and up to +3.1 BLEU over a general baseline.
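
A sketch of what a gradual fine-tuning schedule could look like, assuming relevance scores are already computed (the paper ranks by cross-entropy difference; random scores stand in below).

```python
# Sketch: train each epoch on a progressively smaller, more relevant
# top-ranked subset of the training corpus.
import random

def gradual_fine_tuning_schedule(corpus, scores, start_frac=1.0,
                                 shrink=0.5, epochs=4):
    """Yield per-epoch training subsets, shrinking toward relevant data."""
    ranked = [pair for _, pair in
              sorted(zip(scores, corpus), key=lambda x: -x[0])]
    frac = start_frac
    for _ in range(epochs):
        yield ranked[: max(1, int(len(ranked) * frac))]
        frac *= shrink

corpus = [f"pair_{i}" for i in range(16)]
scores = [random.random() for _ in corpus]   # placeholder relevance scores
for epoch, subset in enumerate(gradual_fine_tuning_schedule(corpus, scores)):
    print(f"epoch {epoch}: {len(subset)} pairs")
```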

2016

pdf
Neural versus Phrase-Based Machine Translation Quality: a Case Study
Luisa Bentivogli | Arianna Bisazza | Mauro Cettolo | Marcello Federico
Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing

pdf
Recurrent Memory Networks for Language Modeling
Ke Tran | Arianna Bisazza | Christof Monz
Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies

pdf bib
Surveys: A Survey of Word Reordering in Statistical Machine Translation: Computational Models and Language Phenomena
Arianna Bisazza | Marcello Federico
Computational Linguistics, Volume 42, Issue 2 - June 2016

pdf
A Simple but Effective Approach to Improve Arabizi-to-English Statistical Machine Translation
Marlies van der Wees | Arianna Bisazza | Christof Monz
Proceedings of the 2nd Workshop on Noisy User-generated Text (WNUT)

A major challenge for statistical machine translation (SMT) of Arabic-to-English user-generated text is the prevalence of text written in Arabizi, or Romanized Arabic. When facing such texts, a translation system trained on conventional Arabic-English data will suffer from extremely low model coverage. In addition, Arabizi is not regulated by any official standardization and is therefore highly ambiguous, which prevents rule-based approaches from achieving good translation results. In this paper, we improve Arabizi-to-English machine translation by presenting a simple but effective Arabizi-to-Arabic transliteration pipeline that does not require knowledge from experts or native Arabic speakers. We incorporate this pipeline into a phrase-based SMT system, and show that translation quality after automatically transliterating Arabizi to Arabic yields results that are comparable to those achieved after human transliteration.
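
A toy illustration of the character-mapping step such a transliteration pipeline involves; the table is a tiny subset of common Arabizi conventions, and the real pipeline additionally resolves the pervasive ambiguity the abstract mentions.

```python
# Sketch: greedy Arabizi-to-Arabic character mapping, longest match first.
ARABIZI_TO_ARABIC = {"3": "ع", "7": "ح", "2": "ء", "kh": "خ", "sh": "ش"}

def transliterate(word: str) -> str:
    out, i = [], 0
    while i < len(word):
        # Prefer two-character units (e.g. "sh") over single characters.
        if word[i:i + 2] in ARABIZI_TO_ARABIC:
            out.append(ARABIZI_TO_ARABIC[word[i:i + 2]]); i += 2
        elif word[i] in ARABIZI_TO_ARABIC:
            out.append(ARABIZI_TO_ARABIC[word[i]]); i += 1
        else:
            out.append(word[i]); i += 1
    return "".join(out)

print(transliterate("7abibi"))   # crude: only "7" is mapped here
```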

pdf
Measuring the Effect of Conversational Aspects on Machine Translation Quality
Marlies van der Wees | Arianna Bisazza | Christof Monz
Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers

Research in statistical machine translation (SMT) is largely driven by formal translation tasks, while translating informal text is much more challenging. In this paper we focus on SMT for the informal genre of dialogues, which has rarely been addressed to date. Concretely, we investigate the effect of dialogue acts, speakers, gender, and text register on SMT quality when translating fictional dialogues. We first create and release a corpus of multilingual movie dialogues annotated with these four dialogue-specific aspects. When measuring translation performance for each of these variables, we find that BLEU fluctuations between their categories are often significantly larger than randomly expected. Following this finding, we hypothesize and show that SMT of fictional dialogues benefits from adaptation towards dialogue acts and registers. Finally, we find that male speakers are harder to translate and use more vulgar language than female speakers, and that vulgarity is often not preserved during translation.

2015

pdf
A distributed inflection model for translating into morphologically rich languages
Ke Tran | Arianna Bisazza | Christof Monz
Proceedings of Machine Translation Summit XV: Papers

pdf
What’s in a Domain? Analyzing Genre and Topic Differences in Statistical Machine Translation
Marlies van der Wees | Arianna Bisazza | Wouter Weerkamp | Christof Monz
Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 2: Short Papers)

pdf
Translation Model Adaptation Using Genre-Revealing Text Features
Marlies van der Wees | Arianna Bisazza | Christof Monz
Proceedings of the Second Workshop on Discourse in Machine Translation

pdf
Five Shades of Noise: Analyzing Machine Translation Errors in User-Generated Text
Marlies van der Wees | Arianna Bisazza | Christof Monz
Proceedings of the Workshop on Noisy User-generated Text

2014

pdf
Class-Based Language Modeling for Translating into Morphologically Rich Languages
Arianna Bisazza | Christof Monz
Proceedings of COLING 2014, the 25th International Conference on Computational Linguistics: Technical Papers

pdf
Word Translation Prediction for Morphologically Rich Languages with Bilingual Neural Networks
Ke M. Tran | Arianna Bisazza | Christof Monz
Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)

2013

pdf
Efficient Solutions for Word Reordering in German-English Phrase-Based Statistical Machine Translation
Arianna Bisazza | Marcello Federico
Proceedings of the Eighth Workshop on Statistical Machine Translation

pdf
Dynamically Shaping the Reordering Search Space of Phrase-Based Statistical Machine Translation
Arianna Bisazza | Marcello Federico
Transactions of the Association for Computational Linguistics, Volume 1

Defining the reordering search space is a crucial issue in phrase-based SMT between distant languages. In fact, the optimal trade-off between accuracy and complexity of decoding is nowadays reached by harshly limiting the input permutation space. We propose a method to dynamically shape such a space and, thus, capture long-range word movements without hurting translation quality or decoding time. The space defined by loose reordering constraints is dynamically pruned through a binary classifier that predicts whether a given input word should be translated right after another. The integration of this model into a phrase-based decoder improves a strong Arabic-English baseline already including state-of-the-art early distortion cost (Moore and Quirk, 2007) and hierarchical phrase orientation models (Galley and Manning, 2008). Significant improvements in the reordering of verbs are achieved by a system that is notably faster than the baseline, while BLEU and METEOR remain stable, or even increase, at a very high distortion limit.
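
A sketch of the pruning classifier's role, with assumed features and synthetic labels: a binary model scores candidate jumps from word i to word j, and only plausible jumps are kept in the reordering search space.

```python
# Sketch: score whether input word j should be translated right after
# word i; low-scoring jumps are pruned from the permutation space.
import numpy as np
from sklearn.linear_model import LogisticRegression

def pair_features(pos_i, pos_j, sent_len):
    """Toy feature vector for a candidate reordering jump i -> j."""
    return [pos_j - pos_i, abs(pos_j - pos_i), pos_i / sent_len]

# Placeholder training data; gold labels here just favor short jumps.
X = np.array([pair_features(i, j, 10)
              for i in range(10) for j in range(10) if i != j])
y = np.array([1 if abs(j - i) <= 2 else 0
              for i in range(10) for j in range(10) if i != j])
clf = LogisticRegression().fit(X, y)

# At decoding time, keep only jumps the classifier finds plausible.
keep = clf.predict_proba([pair_features(0, 7, 10)])[0, 1] > 0.5
print("allow jump 0 -> 7:", keep)
```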

2012

pdf
Modified Distortion Matrices for Phrase-Based Statistical Machine Translation
Arianna Bisazza | Marcello Federico
Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)

pdf
Cutting the Long Tail: Hybrid Language Models for Translation Style Adaptation
Arianna Bisazza | Marcello Federico
Proceedings of the 13th Conference of the European Chapter of the Association for Computational Linguistics

2011

pdf
Fill-up versus interpolation methods for phrase-based SMT adaptation
Arianna Bisazza | Nick Ruiz | Marcello Federico
Proceedings of the 8th International Workshop on Spoken Language Translation: Evaluation Campaign

This paper compares techniques to combine diverse parallel corpora for domain-specific phrase-based SMT system training. We address a common scenario where little in-domain data is available for the task, but where large background models exist for the same language pair. In particular, we focus on phrase table fill-up: a method that effectively exploits background knowledge to improve model coverage, while preserving the more reliable information coming from the in-domain corpus. We present experiments on an emerging transcribed speech translation task, the TED talks. While performing similarly to the popular log-linear and linear interpolation techniques in terms of BLEU and NIST scores, filled-up translation models are more compact and easier to tune by minimum error training.
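
A minimal sketch of the fill-up logic on toy dictionaries: in-domain entries are kept verbatim, and background entries are added only for uncovered source phrases, carrying a provenance flag (the flag values below are arbitrary stand-ins for the feature tuned by minimum error training).

```python
# Sketch: merge two phrase tables, preferring in-domain entries and
# filling gaps from the background table.
def fill_up(in_domain: dict, background: dict) -> dict:
    merged = {src: (tgt, 1.0) for src, tgt in in_domain.items()}
    for src, tgt in background.items():
        if src not in merged:
            merged[src] = (tgt, 2.0)   # provenance flag; value illustrative
    return merged

in_domain = {"the talk": "la conférence"}
background = {"the talk": "le discours", "the speech": "le discours"}
print(fill_up(in_domain, background))
```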

2010

pdf
FBK at WMT 2010: Word Lattices for Morphological Reduction and Chunk-Based Reordering
Christian Hardmeier | Arianna Bisazza | Marcello Federico
Proceedings of the Joint Fifth Workshop on Statistical Machine Translation and MetricsMATR

pdf
Chunk-Based Verb Reordering in VSO Sentences for Arabic-English Statistical Machine Translation
Arianna Bisazza | Marcello Federico
Proceedings of the Joint Fifth Workshop on Statistical Machine Translation and MetricsMATR

pdf
FBK @ IWSLT 2010
Arianna Bisazza | Ioannis Klasinas | Mauro Cettolo | Marcello Federico
Proceedings of the 7th International Workshop on Spoken Language Translation: Evaluation Campaign

This year FBK took part in the BTEC translation task, with source languages Arabic and Turkish and target language English, and in the new TALK task, with source English and target French. We worked in the framework of phrase-based statistical machine translation, aiming on the one hand to improve the coverage of models in the presence of rich morphology, and on the other to make better use of the available resources through data selection techniques. New morphological segmentation rules were developed for Turkish-English. The combination of several Turkish segmentation schemes into a lattice input led to an improvement with respect to last year. The use of additional training data was explored for Arabic-English, while on the English-to-French task an improvement was achieved over a strong baseline by automatically selecting relevant and high-quality data from the available training corpora.

2009

pdf
FBK at IWSLT 2009
Nicola Bertoldi | Arianna Bisazza | Mauro Cettolo | Germán Sanchis-Trilles | Marcello Federico
Proceedings of the 6th International Workshop on Spoken Language Translation: Evaluation Campaign

This paper reports on the participation of FBK at the IWSLT 2009 Evaluation. This year we worked on the Arabic-English and Turkish-English BTEC tasks with a special effort on linguistic preprocessing techniques involving morphological segmentation. In addition, we investigated the adaptation problem in the development of systems for the Chinese-English and English-Chinese challenge tasks; in particular, we explored different ways for clustering training data into topic or dialog-specific subsets: by producing (and combining) smaller but more focused models, we intended to make better use of the available training data, with the ultimate purpose of improving translation quality.

pdf bib
Morphological pre-processing for Turkish to English statistical machine translation
Arianna Bisazza | Marcello Federico
Proceedings of the 6th International Workshop on Spoken Language Translation: Papers

We tried to cope with the complex morphology of Turkish by applying different schemes of morphological word segmentation to the training and test data of a phrase-based statistical machine translation system. These techniques allow for a considerable reduction of the training dictionary and lower the out-of-vocabulary rate of the test set. By minimizing differences between the lexical granularities of Turkish and English, we can produce more refined alignments and a better modeling of the translation task. Morphological segmentation is highly language-dependent and requires a fair amount of linguistic knowledge in its development phase. Yet it is fast and lightweight (it does not involve syntax) and appears to benefit our IWSLT09 system: our best segmentation scheme, combined with a simple lexical approximation technique, achieved a 50% reduction in out-of-vocabulary rate and an improvement of over 5 BLEU points above the baseline.