M. Federico


2016

Modern MT: a new open-source machine translation platform for the translation industry
U. Germann | E. Barbu | L. Bentivogli | N. Bertoldi | N. Bogoychev | C. Buck | D. Caroselli | L. Carvalho | A. Cattelan | R. Cettolo | M. Federico | B. Haddow | D. Madl | L. Mastrostefano | P. Mathur | A. Ruopp | A. Samiotou | V. Sudharshan | M. Trombetti | Jan van der Meer
Proceedings of the 19th Annual Conference of the European Association for Machine Translation: Projects/Products

2012

Overview of the IWSLT 2012 evaluation campaign
M. Federico | M. Cettolo | L. Bentivogli | M. Paul | S. Stüker
Proceedings of the 9th International Workshop on Spoken Language Translation: Evaluation Campaign

We report on the ninth evaluation campaign organized by the IWSLT workshop. This year, the evaluation offered multiple tracks on lecture translation based on the TED corpus, and one track on dialog translation from Chinese to English based on the Olympic trilingual corpus. In particular, the TED tracks included a speech transcription track in English, a speech translation track from English to French, and text translation tracks from English to French and from Arabic to English. In addition to the official tracks, ten unofficial MT tracks were offered that required translating TED talks into English from Chinese, Dutch, German, Polish, Portuguese (Brazilian), Romanian, Russian, Slovak, Slovene, or Turkish. Sixteen teams participated in the evaluation and submitted a total of 48 primary runs. All runs were evaluated with objective metrics, while runs of the official translation tracks were also ranked by crowd-sourced judges. Notably, subjective ranking for the TED task was performed on a progress test, which permitted direct comparison of this year's results against the best results from the 2011 round of the evaluation campaign.

FBK’s machine translation systems for IWSLT 2012’s TED lectures
N. Ruiz | A. Bisazza | R. Cattoni | M. Federico
Proceedings of the 9th International Workshop on Spoken Language Translation: Evaluation Campaign

This paper reports on FBK’s Machine Translation (MT) submissions to the IWSLT 2012 Evaluation on the TED talk translation tasks. We participated in the English-French and the Arabic-, Dutch-, German-, and Turkish-English translation tasks. Several improvements are reported over last year’s baselines. In addition to using fill-up combinations of phrase tables for domain adaptation, we explore corpus filtering based on cross-entropy to produce concise and accurate translation and language models. We describe challenges encountered with an under-resourced language (Turkish) and language-specific preprocessing needs.
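The cross-entropy-based corpus filtering mentioned in the abstract can be illustrated with a minimal sketch in the spirit of cross-entropy-difference data selection: each candidate sentence is scored by its per-word cross-entropy under an in-domain language model minus that under a general-domain model, and the lowest-scoring (most in-domain-like) sentences are kept. The unigram models, corpora, and function names below are illustrative stand-ins, not the actual models or data used in the paper.

```python
import math
from collections import Counter

def unigram_lm(corpus):
    """Build an add-one-smoothed unigram LM from a list of sentences."""
    counts = Counter(w for sent in corpus for w in sent.split())
    total = sum(counts.values())
    vocab = len(counts) + 1  # +1 for unseen words
    return lambda w: (counts[w] + 1) / (total + vocab)

def cross_entropy(sent, lm):
    """Per-word cross-entropy (bits) of a sentence under a unigram LM."""
    words = sent.split()
    return -sum(math.log2(lm(w)) for w in words) / len(words)

def select(candidates, in_lm, gen_lm, k):
    """Keep the k candidates with the lowest cross-entropy difference."""
    scored = sorted(
        candidates,
        key=lambda s: cross_entropy(s, in_lm) - cross_entropy(s, gen_lm),
    )
    return scored[:k]

# Toy example: the in-domain-like candidate is ranked ahead of the
# out-of-domain one.
in_domain_corpus = ["the talk covers machine translation",
                    "speech translation of talks"]
general_corpus = ["the cat sat on the mat",
                  "stock prices fell sharply"]
candidates = ["machine translation of speech",
              "the cat chased the mat"]

in_lm, gen_lm = unigram_lm(in_domain_corpus), unigram_lm(general_corpus)
kept = select(candidates, in_lm, gen_lm, k=1)
print(kept)  # the candidate closer to the in-domain corpus
```

In practice such filtering uses proper n-gram language models over much larger corpora; the unigram version only shows the shape of the selection criterion.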

2011

FBK@IWSLT 2011
N. Ruiz | A. Bisazza | F. Brugnara | D. Falavigna | D. Giuliani | S. Jaber | R. Gretter | M. Federico
Proceedings of the 8th International Workshop on Spoken Language Translation: Evaluation Campaign

This paper reports on FBK’s participation in the IWSLT 2011 Evaluation, namely in the English ASR track, the Arabic-English MT track, and the English-French MT and SLT tracks. Our ASR system features acoustic models trained on a portion of the TED talk recordings that was automatically selected according to the fidelity of the provided transcriptions. Three decoding steps are performed, interleaved with acoustic feature normalization and acoustic model adaptation. Concerning the MT and SLT systems, besides language-specific pre-processing and the automatic introduction of punctuation in the ASR output, two major improvements are reported over last year’s baselines. First, we applied a fill-up method for phrase-table adaptation; second, we explored the use of hybrid class-based language models to better capture the language style of public speeches.
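The fill-up method for phrase-table adaptation mentioned above can be sketched as follows: all in-domain phrase pairs are kept as-is, and entries from a background table are added only for source phrases the in-domain table does not cover. The dict-based table format below is a hypothetical toy layout, not the actual Moses phrase-table format (which also carries multiple scores and, in fill-up, a provenance feature).

```python
# Toy sketch of fill-up phrase-table combination for domain adaptation.
# Tables are modeled as {source_phrase: (target_phrase, score)} dicts;
# this is an illustrative format only.

def fill_up(in_domain, background):
    combined = dict(in_domain)           # in-domain entries always win
    for src, entry in background.items():
        if src not in combined:          # fill coverage gaps only
            combined[src] = entry
    return combined

in_domain = {"guten tag": ("good afternoon", 0.7)}
background = {"guten tag": ("good day", 0.5),
              "danke": ("thanks", 0.9)}

table = fill_up(in_domain, background)
print(table["guten tag"])  # in-domain translation is kept
print(table["danke"])      # gap filled from the background table
```

The design choice is that the background table never overrides in-domain evidence; it only extends coverage, which is what makes the method attractive for adapting a general system to TED-style data.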

2005

A Look inside the ITC-irst SMT System
M. Cettolo | M. Federico | N. Bertoldi | R. Cattoni | B. Chen
Proceedings of Machine Translation Summit X: Posters

This paper presents a look inside the ITC-irst large-vocabulary SMT system developed for the NIST 2005 Chinese-to-English evaluation campaign. Experiments on official NIST test sets provide a thorough overview of the performance of the system, showing how individual components contribute to the overall performance. The presented system performs comparably to the best systems participating in the NIST 2002-2004 MT evaluation campaigns: on the three test sets, the achieved BLEU scores are 26.35%, 26.92%, and 28.13%, respectively.
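The BLEU scores reported above follow the standard definition: modified n-gram precision up to 4-grams combined with a brevity penalty. As a reminder of what such a score measures, here is a compact single-reference, sentence-level sketch; it is not the NIST scoring script actually used in the evaluation, and the floor of 1e-9 for zero n-gram matches is an illustrative smoothing choice.

```python
import math
from collections import Counter

def ngrams(tokens, n):
    """Multiset of n-grams in a token sequence."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def bleu(hypothesis, reference, max_n=4):
    """Sentence-level BLEU against a single reference (toy sketch)."""
    hyp, ref = hypothesis.split(), reference.split()
    log_prec = 0.0
    for n in range(1, max_n + 1):
        hyp_ngrams, ref_ngrams = ngrams(hyp, n), ngrams(ref, n)
        # Clipped counts: a hypothesis n-gram is credited at most as
        # often as it appears in the reference.
        clipped = sum(min(c, ref_ngrams[g]) for g, c in hyp_ngrams.items())
        total = max(sum(hyp_ngrams.values()), 1)
        log_prec += math.log(max(clipped, 1e-9) / total) / max_n
    # Brevity penalty: penalize hypotheses shorter than the reference.
    bp = 1.0 if len(hyp) > len(ref) else math.exp(1 - len(ref) / len(hyp))
    return bp * math.exp(log_prec)

print(bleu("the cat sat on the mat", "the cat sat on the mat"))  # 1.0
```

Corpus-level BLEU, as reported in the paper, aggregates clipped counts and lengths over the whole test set (and over multiple references) before taking precisions, rather than averaging per-sentence scores.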