Gianluca Vico
2026
How Important is ‘Perfect’ English for Machine Translation Prompts?
Patrícia Schmidtová | Niyati Bafna | Seth Aycock | Gianluca Vico | Wiktor Kamzela | Kathy Hämmerl | Vilém Zouhar
Findings of the Association for Computational Linguistics: EACL 2026
Large language models (LLMs) show state-of-the-art performance in machine translation, but are also known to be sensitive to errors in user prompts. Given that these models are largely trained on and respond best to prompts in standard English, this may affect the quality of LLM outputs for second language English speakers as well as real-world lay users, with potentially disproportionate effects on the former. We explore this effect by modeling a range of error types exhibited by such users, motivated by studies of L2 English, and quantifying their impact on LLM performance. We work with two related tasks: machine translation and machine translation evaluation. We find that LLMs-as-MT are brittle to natural spelling errors but not to errors at the phrasal level. However, the variance in quality caused by these errors is lower than the variance over the initial prompt choice, suggesting that “perfect English” for a given prompt is less important than choosing a good prompt. Since lay users and L2 speakers may use non-optimal prompts as well as display imperfect language skills, our work calls for increasing the resilience of model performance to both these phenomena to best serve a diverse user base, from both a robustness and a fairness perspective.
2024
CUNI and LMU Submission to the MRL 2024 Shared Task on Multi-lingual Multi-task Information Retrieval
Katharina Hämmerl | Andrei-Alexandru Manea | Gianluca Vico | Jindřich Helcl | Jindřich Libovický
Proceedings of the Fourth Workshop on Multilingual Representation Learning (MRL 2024)
We present the joint CUNI and LMU submission to the MRL 2024 Shared Task on Multi-lingual Multi-task Information Retrieval. The shared task objective was to explore how we can deploy modern methods in NLP in multi-lingual low-resource settings, tested on two sub-tasks: named-entity recognition and question answering. Our solutions to the subtasks are based on data acquisition and model adaptation. We compare the performance of our submitted systems with the translate-test approach, which proved to be the most useful in the previous edition of the shared task. Our results show that using more data as well as fine-tuning recent multilingual pre-trained models leads to considerable improvements over the translate-test baseline. Our code is available at https://github.com/ufal/mrl2024-multilingual-ir-shared-task.
2023
Larth: Dataset and Machine Translation for Etruscan
Gianluca Vico | Gerasimos Spanakis
Proceedings of the Ancient Language Processing Workshop
Etruscan is an ancient language spoken in Italy from the 7th century BC to the 1st century AD. The language has no native speakers today, and its resources are scarce: only an estimated 12,000 inscriptions are known. To the best of our knowledge, no publicly available Etruscan corpora exist for natural language processing. We therefore propose a dataset for machine translation from Etruscan to English, containing 2,891 translated examples from existing academic sources. Some examples are extracted manually, while others are acquired automatically. Along with the dataset, we benchmark different machine translation models, observing that a BLEU score of 10.1 can be achieved with a small transformer model. Releasing the dataset can help enable future research on this language, on similar languages, or on other languages with scarce resources.