Cristian García-Romero


2026

Machine translation (MT) has achieved near-human quality for some language pairs, yet its output remains distinct from human translation, primarily in its predictability. While MT systems generate low-perplexity text, humans produce less predictable outputs. This raises the question of whether humans can intuitively use this difference in predictability to distinguish between human- and machine-translated text. We report on a study with 30 native Spanish speakers tasked with identifying the origin of English-to-Spanish translations. We compared their performance against two perplexity-based baselines: a large language model capturing fluency, and a neural MT model, conditioned on the source text, capturing both fluency and adequacy. Our findings reveal that human judgments correlate with fluency-based perplexity, but show no correlation with the perplexity that also accounts for adequacy. This suggests that annotators’ decisions are driven by the target text’s fluency. Consequently, a simple computational baseline using source-aware perplexity significantly outperforms human annotators. This work contributes to a deeper understanding of human perception of MT, highlighting a potential bias in current evaluation protocols toward fluency over adequacy. This bias may lead to an overestimation of the capabilities of highly fluent systems and underscores the need for evaluation methods ensuring translation adequacy is not overlooked.
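The two baselines above differ only in what the model is conditioned on, but both reduce to the same quantity: perplexity, the exponentiated average negative log-probability the model assigns to the target tokens. As a minimal sketch of that computation (the per-token log-probabilities here are hypothetical stand-ins for what an actual LLM or source-conditioned NMT model would emit):

```python
import math

def perplexity(token_logprobs):
    """Perplexity from per-token natural-log probabilities:
    exp of the average negative log-likelihood."""
    avg_nll = -sum(token_logprobs) / len(token_logprobs)
    return math.exp(avg_nll)

# Illustrative check: if every token has probability 1/4,
# perplexity is exactly 4, regardless of sequence length.
uniform = [math.log(0.25)] * 10
print(round(perplexity(uniform), 2))  # → 4.0
```

In the study's terms, a fluency-only score would use log-probabilities from an unconditional language model, while the source-aware score would use probabilities conditioned on the English source sentence; lower perplexity means the text is more predictable to that model.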

2024

The LiLowLa (“Lightweight neural translation technologies for low-resource languages”) project aims to enhance machine translation (MT) and translation memory (TM) technologies, particularly for low-resource language pairs, for which adequate linguistic resources are scarce. The project started in September 2022 and will run until August 2025.

2022

An important goal of the MaCoCu project is to improve EU-specific NLP systems that concern its Digital Service Infrastructures (DSIs). In this paper we aim to boost the creation of such domain-specific NLP systems. To do so, we explore the feasibility of building an automatic classifier that identifies which segments in a generic (potentially parallel) corpus are relevant for a particular DSI. We create an evaluation data set by crawling DSI-specific web domains and then compare different strategies for building our DSI classifier for text in three languages: English, Spanish and Dutch. We use pre-trained (multilingual) language models to perform the classification, with zero-shot classification for Spanish and Dutch. The results are promising, as we are able to classify DSIs with between 70 and 80% accuracy, even without in-language training data. A manual annotation of the data revealed that we can also find DSI-specific data in crawled texts from general web domains with reasonable accuracy. We publicly release all data, predictions and code, so as to allow future investigation into whether exploiting this DSI-specific data actually leads to improved performance on particular applications, such as machine translation.

We introduce the project “MaCoCu: Massive collection and curation of monolingual and bilingual data: focus on under-resourced languages”, funded by the Connecting Europe Facility, which aims to build monolingual and parallel corpora for under-resourced European languages. The approach consists of crawling large amounts of textual data from carefully selected top-level domains of the Internet, and then applying a curation and enrichment pipeline. In addition to corpora, the project will release successive versions of the free/open-source web crawling and curation software used.