2025
pdf
bib
abs
Warmup Generations: A Task-Agnostic Approach for Guiding Sequence-to-Sequence Learning with Unsupervised Initial State Generation
Senyu Li
|
Zipeng Sun
|
Jiayi Wang
|
Xue Liu
|
Pontus Stenetorp
|
Siva Reddy
|
David Ifeoluwa Adelani
Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
Traditional supervised fine-tuning (SFT) strategies for sequence-to-sequence tasks often train models to directly generate the target output. Recent work has shown that guiding models with intermediate steps—such as keywords, outlines, or reasoning chains—can significantly improve performance, coherence, and interpretability. However, these methods often depend on predefined intermediate formats and annotated data, limiting their scalability and generalizability. In this work, we introduce a task-agnostic framework that enables models to generate intermediate “warmup” sequences. These warmup sequences, serving as an initial state for subsequent generation, are optimized to enhance the probability of generating the target sequence without relying on external supervision or human-designed structures. Drawing inspiration from reinforcement learning principles, our method iteratively refines these intermediate steps to maximize their contribution to the final output, similar to reward-driven optimization in reinforcement learning with human feedback. Experimental results across tasks such as translation, summarization, and multiple-choice question answering for logical reasoning show that our approach outperforms traditional SFT methods and offers a scalable and flexible solution for sequence-to-sequence tasks.
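To make the warmup idea concrete, here is a minimal sketch of the two-stage inference it implies: sample an intermediate warmup sequence first, then condition on it to decode the target. The model choice and the <warmup>/<target> delimiters are illustrative assumptions, not the paper's actual setup.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")   # placeholder model
model = AutoModelForCausalLM.from_pretrained("gpt2")

def generate_with_warmup(source: str, max_new_tokens: int = 64) -> str:
    # Stage 1: sample an unsupervised warmup sequence as the initial state.
    ids = tokenizer(source + " <warmup>", return_tensors="pt").input_ids
    warmup = model.generate(ids, max_new_tokens=max_new_tokens, do_sample=True)
    # Stage 2: condition on source + warmup to decode the final target.
    state = tokenizer.decode(warmup[0], skip_special_tokens=True)
    ids2 = tokenizer(state + " <target>", return_tensors="pt").input_ids
    out = model.generate(ids2, max_new_tokens=max_new_tokens)
    return tokenizer.decode(out[0][ids2.shape[1]:], skip_special_tokens=True)
```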
pdf
bib
abs
SSA-COMET: Do LLMs Outperform Learned Metrics in Evaluating MT for Under-Resourced African Languages?
Senyu Li
|
Jiayi Wang
|
Felermino D. M. A. Ali
|
Colin Cherry
|
Daniel Deutsch
|
Eleftheria Briakou
|
Rui Sousa-Silva
|
Henrique Lopes Cardoso
|
Pontus Stenetorp
|
David Ifeoluwa Adelani
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing
Evaluating machine translation (MT) quality for under-resourced African languages remains a significant challenge, as existing metrics often suffer from limited language coverage and poor performance in low-resource settings. While recent efforts, such as AfriCOMET, have addressed some of the issues, they are still constrained by small evaluation sets, a lack of publicly available training data tailored to African languages, and inconsistent performance in extremely low-resource scenarios. In this work, we introduce SSA-MTE, a large-scale human-annotated MT evaluation (MTE) dataset covering 13 African language pairs from the News domain, with over 63,000 sentence-level annotations from a diverse set of MT systems. Based on this data, we develop SSA-COMET and SSA-COMET-QE, improved reference-based and reference-free evaluation metrics. We also benchmark prompting-based approaches using state-of-the-art LLMs like GPT-4o and Claude. Our experimental results show that SSA-COMET models significantly outperform AfriCOMET and are competitive with the strongest LLM (Gemini 2.5 Pro) evaluated in our study, particularly on low-resource languages such as Twi, Luo, and Yoruba. All resources are released under open licenses to support future research.
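As context for how such metrics are compared: metric quality is typically reported as segment-level Spearman correlation between metric scores and human adequacy annotations. A minimal illustration (all numbers invented):

```python
from scipy.stats import spearmanr

human  = [78.0, 45.5, 90.0, 62.0, 33.0]   # illustrative adequacy scores
metric = [0.81, 0.40, 0.92, 0.55, 0.47]   # illustrative metric outputs

rho, pvalue = spearmanr(human, metric)
print(f"Spearman rho = {rho:.3f} (p = {pvalue:.3g})")
```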
pdf
bib
abs
Multilingual Language Model Pretraining using Machine-translated Data
Jiayi Wang
|
Yao Lu
|
Maurice Weber
|
Max Ryabinin
|
David Ifeoluwa Adelani
|
Yihong Chen
|
Raphael Tang
|
Pontus Stenetorp
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing
English, as a very high-resource language, enables the pretraining of high-quality large language models (LLMs). However, the same cannot be said for most other languages, likely due to a gap in the quality and diversity of available multilingual pretraining corpora. In this work, we find that documents machine-translated from a high-quality English corpus can contribute significantly to the pretraining quality of multilingual LLMs. Concretely, we translate FineWeb-Edu, a high-quality English web corpus, into nine languages, resulting in a 1.7-trillion-token corpus, which we call TransWebEdu, and pretrain a 1.3B-parameter model, TransWebLLM, from scratch on this corpus. Across non-English understanding and reasoning tasks, we show that TransWebLLM matches or even outperforms multilingual LLMs of similar size, including Llama3.2, Qwen2.5, and Gemma3, despite being trained on an order of magnitude less data. Moreover, we show that adding fewer than 5% of TransWebLLM’s training tokens as domain-specific data for continued pretraining yields state-of-the-art results in Arabic, Indonesian, Swahili, and Welsh for understanding and commonsense reasoning tasks. To promote reproducibility, we release our corpus and models under Open Source Initiative-approved licenses.
pdf
bib
Proceedings of the 5th Workshop on Multilingual Representation Learning (MRL 2025)
David Ifeoluwa Adelani
|
Catherine Arnett
|
Duygu Ataman
|
Tyler A. Chang
|
Hila Gonen
|
Rahul Raja
|
Fabian Schmidt
|
David Stap
|
Jiayi Wang
Proceedings of the 5th Workshop on Multilingual Representation Learning (MRL 2025)
pdf
bib
abs
Findings of the WMT25 Shared Task on Automated Translation Evaluation Systems: Linguistic Diversity is Challenging and References Still Help
Alon Lavie
|
Greg Hanneman
|
Sweta Agrawal
|
Diptesh Kanojia
|
Chi-Kiu Lo
|
Vilém Zouhar
|
Frederic Blain
|
Chrysoula Zerva
|
Eleftherios Avramidis
|
Sourabh Deoghare
|
Archchana Sindhujan
|
Jiayi Wang
|
David Ifeoluwa Adelani
|
Brian Thompson
|
Tom Kocmi
|
Markus Freitag
|
Daniel Deutsch
Proceedings of the Tenth Conference on Machine Translation
The WMT25 Shared Task on Automated Translation Evaluation Systems evaluates metrics and quality estimation systems that assess the quality of language translation systems. This task unifies and consolidates the separate WMT shared tasks on Machine Translation Evaluation Metrics and Quality Estimation from previous years. Our primary goal is to encourage the development and assessment of new state-of-the-art translation quality evaluation systems. The shared task this year consisted of three subtasks: (1) segment-level quality score prediction, (2) span-level translation error annotation, and (3) quality-informed segment-level error correction. The evaluation data for the shared task were provided by the General MT shared task and were complemented by “challenge sets” from both the organizers and participants. Task 1 results indicate the strong performance of large LLMs at the system level, while reference-based baseline metrics outperform LLMs at the segment level. Task 2 results indicate that accurate error detection and balancing precision and recall are persistent challenges. Task 3 results show that minimal editing is challenging even when informed by quality indicators. Robustness across the broad diversity of languages remains a major challenge across all three subtasks.
pdf
bib
abs
Evaluating WMT 2025 Metrics Shared Task Submissions on the SSA-MTE African Challenge Set
Senyu Li
|
Felermino Dario Mario Ali
|
Jiayi Wang
|
Rui Sousa-Silva
|
Henrique Lopes Cardoso
|
Pontus Stenetorp
|
Colin Cherry
|
David Ifeoluwa Adelani
Proceedings of the Tenth Conference on Machine Translation
This paper presents the evaluation of submissions to the WMT 2025 Metrics Shared Task on the SSA-MTE challenge set, a large-scale benchmark for machine translation evaluation (MTE) in Sub-Saharan African languages. The SSA-MTE test sets contain over 12,768 human-annotated adequacy scores across 11 language pairs sourced from English, French, and Portuguese, spanning 6 commercial and open-source MT systems. Results show that correlations with human judgments remain generally low, with most systems falling below the 0.4 Spearman threshold for medium-level agreement. Performance also varies widely across language pairs; in some extremely low-resource cases, such as Portuguese–Emakhuwa, correlations drop to around 0.1, underscoring the difficulty of evaluating MT for very low-resource African languages. These findings highlight the urgent need for more research on robust, generalizable MT evaluation methods tailored for African languages.
2024
pdf
bib
abs
Strings from the Library of Babel: Random Sampling as a Strong Baseline for Prompt Optimisation
Yao Lu
|
Jiayi Wang
|
Raphael Tang
|
Sebastian Riedel
|
Pontus Stenetorp
Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)
Recent prompt optimisation approaches use the generative nature of language models to produce prompts – even rivaling the performance of human-curated prompts. In this paper, we demonstrate that randomly sampling tokens from the model vocabulary as “separators” can be as effective as language models for prompt-style text classification. Our experiments show that random separators are competitive baselines, having less than a 1% difference compared to previous self-optimisation methods and showing a 12% average relative improvement over strong human baselines across nine text classification tasks and eight language models. We further analyse this phenomenon in detail using three different random generation strategies, establishing that the language space is rich with potentially good separators, with a greater than 40% average chance that a randomly drawn separator performs better than human-curated separators. These observations challenge the common assumption that an effective prompt should be human readable or task relevant and establish a strong baseline for prompt optimisation research.
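A minimal sketch of the random-separator baseline described above: draw k token ids uniformly from the tokenizer vocabulary and splice the decoded string between the input and the label cue. The tokenizer and the prompt template are assumptions for illustration.

```python
import random
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")

def random_separator(k: int = 3, seed: int = 0) -> str:
    # Sample k token ids uniformly at random from the vocabulary.
    rng = random.Random(seed)
    ids = rng.sample(range(tokenizer.vocab_size), k)
    return tokenizer.decode(ids)

sep = random_separator()
prompt = f"Review: a gripping, well-acted film. {sep} Sentiment:"
```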
pdf
bib
abs
AfriMTE and AfriCOMET: Enhancing COMET to Embrace Under-resourced African Languages
Jiayi Wang
|
David Ifeoluwa Adelani
|
Sweta Agrawal
|
Marek Masiak
|
Ricardo Rei
|
Eleftheria Briakou
|
Marine Carpuat
|
Xuanli He
|
Sofia Bourhim
|
Andiswa Bukula
|
Muhidin Mohamed
|
Temitayo Olatoye
|
Tosin Adewumi
|
Hamam Mokayed
|
Christine Mwase
|
Wangui Kimotho
|
Foutse Yuehgoh
|
Anuoluwapo Aremu
|
Jessica Ojo
|
Shamsuddeen Hassan Muhammad
|
Salomey Osei
|
Abdul-Hakeem Omotayo
|
Chiamaka Chukwuneke
|
Perez Ogayo
|
Oumaima Hourrane
|
Salma El Anigri
|
Lolwethu Ndolela
|
Thabiso Mangwana
|
Shafie Abdi Mohamed
|
Hassan Ayinde
|
Oluwabusayo Olufunke Awoyomi
|
Lama Alkhaled
|
Sana Al-azzawi
|
Naome A. Etori
|
Millicent Ochieng
|
Clemencia Siro
|
Njoroge Kiragu
|
Eric Muchiri
|
Wangari Kimotho
|
Lyse Naomi Wamba Momo
|
Daud Abolade
|
Simbiat Ajao
|
Iyanuoluwa Shode
|
Ricky Macharm
|
Ruqayya Nasir Iro
|
Saheed S. Abdullahi
|
Stephen E. Moore
|
Bernard Opoku
|
Zainab Akinjobi
|
Abeeb Afolabi
|
Nnaemeka Obiefuna
|
Onyekachi Raphael Ogbu
|
Sam Ochieng’
|
Verrah Akinyi Otiende
|
Chinedu Emmanuel Mbonu
|
Sakayo Toadoum Sari
|
Yao Lu
|
Pontus Stenetorp
Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)
Despite the recent progress on scaling multilingual machine translation (MT) to several under-resourced African languages, accurately measuring this progress remains challenging, since evaluation is often performed on n-gram matching metrics such as BLEU, which typically show a weaker correlation with human judgments. Learned metrics such as COMET have higher correlation; however, the lack of evaluation data with human ratings for under-resourced languages, complexity of annotation guidelines like Multidimensional Quality Metrics (MQM), and limited language coverage of multilingual encoders have hampered their applicability to African languages. In this paper, we address these challenges by creating high-quality human evaluation data with simplified MQM guidelines for error detection and direct assessment (DA) scoring for 13 typologically diverse African languages. Furthermore, we develop AfriCOMET: COMET evaluation metrics for African languages by leveraging DA data from well-resourced languages and an African-centric multilingual encoder (AfroXLM-R) to create the state-of-the-art MT evaluation metrics for African languages with respect to Spearman-rank correlation with human judgments (0.441).
pdf
bib
abs
Are LLMs Breaking MT Metrics? Results of the WMT24 Metrics Shared Task
Markus Freitag
|
Nitika Mathur
|
Daniel Deutsch
|
Chi-Kiu Lo
|
Eleftherios Avramidis
|
Ricardo Rei
|
Brian Thompson
|
Frederic Blain
|
Tom Kocmi
|
Jiayi Wang
|
David Ifeoluwa Adelani
|
Marianna Buchicchio
|
Chrysoula Zerva
|
Alon Lavie
Proceedings of the Ninth Conference on Machine Translation
The WMT24 Metrics Shared Task evaluated the performance of automatic metrics for machine translation (MT), with a major focus on LLM-based translations that were generated as part of the WMT24 General MT Shared Task. As LLMs become increasingly popular in MT, it is crucial to determine whether existing evaluation metrics can accurately assess the output of these systems. To provide a robust benchmark for this evaluation, human assessments were collected using Multidimensional Quality Metrics (MQM), continuing the practice from recent years. Furthermore, building on the success of the previous year, a challenge set subtask was included, requiring participants to design contrastive test suites that specifically target a metric’s ability to identify and penalize different types of translation errors. Finally, the meta-evaluation procedure was refined to better reflect real-world usage of MT metrics, focusing on pairwise accuracy at both the system and segment levels. We present an extensive analysis of how well metrics perform on three language pairs: English to Spanish (Latin America), Japanese to Chinese, and English to German. The results strongly confirm last year’s findings: fine-tuned neural metrics continue to perform well, even when used to evaluate LLM-based translation systems.
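For readers unfamiliar with the meta-evaluation criterion: pairwise accuracy is the fraction of system pairs that a metric orders the same way as the human judgments. A toy sketch with invented scores:

```python
from itertools import combinations

def pairwise_accuracy(human: dict, metric: dict) -> float:
    agree, total = 0, 0
    for a, b in combinations(human, 2):
        h, m = human[a] - human[b], metric[a] - metric[b]
        if h == 0:          # skip pairs the humans tie
            continue
        total += 1
        agree += (h > 0) == (m > 0)
    return agree / total

human  = {"sysA": 85.0, "sysB": 80.2, "sysC": 77.9}   # invented human scores
metric = {"sysA": 0.78, "sysB": 0.81, "sysC": 0.69}   # invented metric scores
print(pairwise_accuracy(human, metric))                # 2 of 3 pairs agree
```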
pdf
bib
abs
Evaluating WMT 2024 Metrics Shared Task Submissions on AfriMTE (the African Challenge Set)
Jiayi Wang
|
David Ifeoluwa Adelani
|
Pontus Stenetorp
Proceedings of the Ninth Conference on Machine Translation
The AfriMTE challenge set from the WMT 2024 Metrics Shared Task aims to evaluate the capabilities of machine translation evaluation metrics on low-resource African languages, primarily assessing cross-lingual transfer learning and the generalization of machine translation metrics across a wide range of under-resourced languages. In this paper, we analyze the submissions to the WMT 2024 Metrics Shared Task. Our findings indicate that language-specific adaptation, cross-lingual transfer learning, and larger language model sizes contribute significantly to improved metric performance. Moreover, supervised models of relatively moderate size demonstrate robust performance when augmented with language-specific adaptation for low-resource African languages. Finally, submissions show promising results for language pairs including Darija-French, English-Egyptian Arabic, and English-Swahili. However, significant challenges persist for extremely low-resource languages such as English-Luo and English-Twi, highlighting areas for future research and improvement in machine translation metrics for African languages.
2023
pdf
bib
abs
Easy Guided Decoding in Providing Suggestions for Interactive Machine Translation
Ke Wang
|
Xin Ge
|
Jiayi Wang
|
Yuqi Zhang
|
Yu Zhao
Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
Machine translation technology has made great progress in recent years, but it cannot guarantee error-free results. In computer-aided translation, human translators post-edit machine translations to correct errors. To expedite the post-editing process, many works have investigated machine translation in interactive modes, in which machines can automatically refine the rest of a translation constrained by the human’s edits. Translation Suggestion (TS), as an interactive mode to assist human translators, requires machines to generate alternatives for specific incorrect words or phrases selected by human translators. In this paper, we utilize the parameterized objective function of neural machine translation (NMT) and propose a novel constrained decoding algorithm, namely Prefix-Suffix Guided Decoding (PSGD), to deal with the TS problem without additional training. Compared to the state-of-the-art lexically constrained decoding method, PSGD improves translation quality by an average of 10.6 BLEU and reduces time overhead by an average of 63.4% on benchmark datasets. Furthermore, on both the WeTS and the WMT 2022 Translation Suggestion datasets, it outperforms other supervised learning systems trained with TS-annotated data.
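A heavily simplified sketch of the idea behind PSGD: greedily extend the prefix one token at a time and keep the infill length whose full hypothesis (prefix + infill + suffix) scores highest under the model. Here next_token_fn and score_fn stand in for NMT model calls and are assumptions; the actual PSGD algorithm is more involved.

```python
from typing import Callable, List

def psgd_infill(prefix: List[int], suffix: List[int],
                next_token_fn: Callable[[List[int]], int],
                score_fn: Callable[[List[int]], float],
                max_len: int = 10) -> List[int]:
    best_score, best_infill = float("-inf"), []
    infill: List[int] = []
    for _ in range(max_len):
        # Extend the candidate infill by one token after the prefix.
        infill.append(next_token_fn(prefix + infill))
        # Score the full hypothesis with the suffix re-attached.
        score = score_fn(prefix + infill + suffix)
        if score > best_score:
            best_score, best_infill = score, list(infill)
    return best_infill
```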
2022
pdf
bib
abs
Distinguishing Non-natural from Natural Adversarial Samples for More Robust Pre-trained Language Model
Jiayi Wang
|
Rongzhou Bao
|
Zhuosheng Zhang
|
Hai Zhao
Findings of the Association for Computational Linguistics: ACL 2022
Recently, the problem of the robustness of pre-trained language models (PrLMs) has received increasing research interest. The latest studies on adversarial attacks achieve high attack success rates against PrLMs, claiming that PrLMs are not robust. However, we find that the adversarial samples on which PrLMs fail are mostly non-natural and do not appear in reality. We question the validity of the current evaluation of the robustness of PrLMs based on these non-natural adversarial samples and propose an anomaly detector to evaluate the robustness of PrLMs with more natural adversarial samples. We also investigate two applications of the anomaly detector: (1) In data augmentation, we employ the anomaly detector to force the generation of augmented data that are distinguished as non-natural, which brings larger gains in the accuracy of PrLMs. (2) We apply the anomaly detector to a defense framework to enhance the robustness of PrLMs. It can be used to defend against all types of attacks and achieves higher accuracy on both adversarial samples and compliant samples than other defense frameworks.
pdf
bib
abs
TSMind: Alibaba and Soochow University’s Submission to the WMT22 Translation Suggestion Task
Xin Ge
|
Ke Wang
|
Jiayi Wang
|
Nini Xiao
|
Xiangyu Duan
|
Yu Zhao
|
Yuqi Zhang
Proceedings of the Seventh Conference on Machine Translation (WMT)
This paper describes the joint submission of Alibaba and Soochow University to the WMT 2022 Shared Task on Translation Suggestion (TS). We participate in the English to/from German and English to/from Chinese tasks. We adopt the recently successful paradigm of fine-tuning large-scale pre-trained models on downstream tasks. We choose FAIR’s WMT19 English to/from German news translation system and MBART50 for English to/from Chinese as our pre-trained models. Given the task’s restriction on the use of training data, we follow the data augmentation strategies provided by Yang to boost our TS model’s performance, and we further apply the dual conditional cross-entropy model and a GPT-2 language model to filter the augmented data. The final leaderboard shows that our submissions rank first in three of the four language directions of the Naive TS track of the WMT22 Translation Suggestion task.
2021
pdf
bib
Defending Pre-trained Language Models from Adversarial Word Substitution Without Performance Sacrifice
Rongzhou Bao
|
Jiayi Wang
|
Hai Zhao
Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021
pdf
bib
abs
Beyond Glass-Box Features: Uncertainty Quantification Enhanced Quality Estimation for Neural Machine Translation
Ke Wang
|
Yangbin Shi
|
Jiayi Wang
|
Yuqi Zhang
|
Yu Zhao
|
Xiaolin Zheng
Findings of the Association for Computational Linguistics: EMNLP 2021
Quality Estimation (QE) plays an essential role in applications of Machine Translation (MT). Traditionally, a QE system accepts the original source text and a translation from a black-box MT system as input. Recently, a few studies have indicated that, as a by-product of translation, QE benefits from information about the model and training data of the MT system that produced the translations; this is called “glass-box QE”. In this paper, we extend the definition of “glass-box QE” more generally to uncertainty quantification with both “black-box” and “glass-box” approaches and design several features deduced from them to blaze a new trail in improving QE’s performance. We propose a framework that fuses the feature engineering of uncertainty quantification into a pre-trained cross-lingual language model to predict translation quality. Experimental results show that our method achieves state-of-the-art performance on the datasets of the WMT 2020 QE shared task.
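One widely used uncertainty feature of this kind is Monte Carlo dropout: run the MT model several times with dropout left active and use the spread of the translation’s log-likelihood as a QE feature. A hedged sketch assuming a Hugging Face-style seq2seq model; the paper’s exact features differ.

```python
import torch

def mc_dropout_features(model, src_ids, tgt_ids, n_samples: int = 8):
    model.train()               # keep dropout active at inference time
    scores = []
    with torch.no_grad():
        for _ in range(n_samples):
            out = model(input_ids=src_ids, labels=tgt_ids)  # assumed seq2seq API
            scores.append(-out.loss.item())     # mean token log-likelihood
    scores = torch.tensor(scores)
    # Mean and standard deviation serve as uncertainty features for QE.
    return scores.mean().item(), scores.std().item()
```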
pdf
bib
abs
QEMind: Alibaba’s Submission to the WMT21 Quality Estimation Shared Task
Jiayi Wang
|
Ke Wang
|
Boxing Chen
|
Yu Zhao
|
Weihua Luo
|
Yuqi Zhang
Proceedings of the Sixth Conference on Machine Translation
Quality Estimation, as a crucial step of quality control for machine translation, has been explored for years. The goal is to investigate automatic methods for estimating the quality of machine translation results without reference translations. In this year’s WMT QE shared task, we utilize the large-scale XLM-RoBERTa pre-trained model and additionally propose several useful features for evaluating the uncertainty of translations to build our QE system, named QEMind. The system has been applied to the sentence-level scoring task of Direct Assessment and the binary score prediction task of Critical Error Detection. In this paper, we present our submissions to the WMT 2021 QE shared task, and an extensive set of experimental results shows that our multilingual systems outperform the best system in the Direct Assessment QE task of WMT 2020.
2020
pdf
bib
abs
Computer Assisted Translation with Neural Quality Estimation and Automatic Post-Editing
Ke Wang
|
Jiayi Wang
|
Niyu Ge
|
Yangbin Shi
|
Yu Zhao
|
Kai Fan
Findings of the Association for Computational Linguistics: EMNLP 2020
With the advent of neural machine translation, there has been a marked shift towards leveraging and consuming machine translation results. However, the gap between machine translation systems and human translators needs to be manually closed by post-editing. In this paper, we propose an end-to-end deep learning framework for quality estimation and automatic post-editing of machine translation output. Our goal is to provide error-correction suggestions and to further relieve the burden on human translators through an interpretable model. To imitate the behavior of human translators, we design three efficient delegation modules – quality estimation, generative post-editing, and atomic operation post-editing – and construct a hierarchical model based on them. We examine this approach with the English–German dataset from the WMT 2017 APE shared task, and our experimental results achieve state-of-the-art performance. In a human evaluation, we also verify that certified translators can significantly expedite their post-editing with our model.
pdf
bib
abs
Alibaba’s Submission for the WMT 2020 APE Shared Task: Improving Automatic Post-Editing with Pre-trained Conditional Cross-Lingual BERT
Jiayi Wang
|
Ke Wang
|
Kai Fan
|
Yuqi Zhang
|
Jun Lu
|
Xin Ge
|
Yangbin Shi
|
Yu Zhao
Proceedings of the Fifth Conference on Machine Translation
The goal of Automatic Post-Editing (APE) is to examine automatic methods for correcting translation errors generated by an unknown machine translation (MT) system. This paper describes Alibaba’s submissions to the WMT 2020 APE Shared Task for the English-German language pair. We design a two-stage training pipeline. First, a BERT-like cross-lingual language model is pre-trained by randomly masking target sentences alone. Then, an additional neural decoder on top of the pre-trained model is jointly fine-tuned for the APE task. We also apply an imitation learning strategy to augment a reasonable amount of pseudo APE training data, preventing the model from overfitting on the limited real training data and boosting performance on held-out data. To verify our proposed model and data augmentation, we examine our approach with the well-known benchmark English-German dataset from the WMT 2017 APE task. The experimental results demonstrate that our system significantly outperforms all other baselines and achieves state-of-the-art performance. The final results on the WMT 2020 test dataset show that our submission achieves +5.56 BLEU and -4.57 TER with respect to the official MT baseline.
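A small sketch of the target-side-only masking scheme in stage one: given a concatenated (source, target) pair, corrupt tokens only on the target side so the model learns to reconstruct the translation from an intact source. The token strings, mask rate, and separator are illustrative assumptions.

```python
import random

MASK, SEP = "[MASK]", "[SEP]"

def mask_target_only(src_tokens, tgt_tokens, rate: float = 0.15, seed: int = 0):
    rng = random.Random(seed)
    masked = [MASK if rng.random() < rate else t for t in tgt_tokens]
    # Source side stays intact; only target tokens are masked for prediction.
    return src_tokens + [SEP] + masked
```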
2018
pdf
bib
abs
Alibaba Submission for WMT18 Quality Estimation Task
Jiayi Wang
|
Kai Fan
|
Bo Li
|
Fengming Zhou
|
Boxing Chen
|
Yangbin Shi
|
Luo Si
Proceedings of the Third Conference on Machine Translation: Shared Task Papers
The goal of the WMT 2018 Shared Task on Translation Quality Estimation is to investigate automatic methods for estimating the quality of machine translation results without reference translations. This paper presents the QE Brain system, which proposes a neural Bilingual Expert model as a feature extractor, based on a conditional target language model with a bidirectional transformer, and then processes the semantic representations of the source and the translation output with a Bi-LSTM predictive model for automatic quality estimation. The system has been applied to the sentence-level scoring and ranking tasks as well as the word-level task of finding errors for each word in a translation. An extensive set of experimental results shows that our system outperforms the best results from the WMT 2017 Quality Estimation tasks and obtains top results in WMT 2018.