Hannah Liu
2026
SiniticMTError: A Machine Translation Dataset with Error Annotations for Sinitic Languages
Hannah Liu | Junghyun Min | Annie En-Shiun Lee | Ethan Yue Heng Cheung | Shou-Yi Hung | Elsie Chan | Shiyao Qian | Runtong Liang | Kimlan Huynh | Wing Yu Yip | York Hay Ng | Tsz Fung Yau | Ka Ieng Charlotte Lo | You-Wei Wu | Richard Tzong-Han Tsai
Proceedings of the Fifteenth Language Resources and Evaluation Conference
Despite major advances in machine translation (MT) in recent years, progress remains limited for many low-resource languages that lack large-scale training data and linguistic resources. In this paper, we introduce SiniticMTError, a novel fine-grained dataset that builds on existing parallel corpora to provide error span, error type, and error severity annotations in machine-translated examples from English to Mandarin, Cantonese, and Wu Chinese, along with a Mandarin-Hokkien component derived from a non-parallel source. Our dataset serves as a resource for the MT community to fine-tune models with error detection capabilities, supporting research on translation quality estimation, error-aware generation, and low-resource language evaluation. We also establish baseline results using language models to benchmark translation error detection performance. Specifically, we evaluate multiple open-source and closed-source LLMs using span-level and correlation-based MQM metrics, revealing their limited precision and underscoring the need for our dataset. Finally, we report our rigorous annotation process by native speakers, with analyses of pilot studies, iterative feedback, insights, and patterns in error type and severity.
OasisSimp: An Open-source Asian-English Sentence Simplification Dataset
Hannah Liu | Murphy Tian | Iqra Ali | Haonan Gao | Qiaoyiwen Wu | Blair Yang | Uthayasanker Thayasivam | Annie En-Shiun Lee | Pakawat Nakwijit | Surangika Ranathunga | Ravi Shekhar
Proceedings of the Fifteenth Language Resources and Evaluation Conference
Text simplification aims to make complex text more accessible by reducing linguistic complexity while preserving the original meaning. However, progress in this area remains limited for mid-resource and low-resource languages due to the scarcity of high-quality data. To address this gap, we introduce OasisSimp, a multilingual dataset for sentence-level text simplification covering five languages: English, Sinhala, Tamil, Pashto, and Thai. Among these, no prior sentence simplification datasets exist for Thai, Pashto, and Tamil, while only limited data is available for Sinhala. Each language's simplification dataset was created through direct human annotation, where trained annotators followed detailed guidelines to simplify sentences while maintaining meaning, fluency, and grammatical correctness. We evaluate eight open-weight multilingual Large Language Models (LLMs) on OasisSimp and observe substantial performance disparities between high-resource and low-resource languages, highlighting the simplification challenges in multilingual settings. OasisSimp thus provides both a valuable multilingual resource and a challenging benchmark, revealing the limitations of current LLM-based simplification methods and paving the way for future research in low-resource text simplification. The dataset will be open-sourced upon acceptance.
2024
SIB-200: A Simple, Inclusive, and Big Evaluation Dataset for Topic Classification in 200+ Languages and Dialects
David Ifeoluwa Adelani | Hannah Liu | Xiaoyu Shen | Nikita Vassilyev | Jesujoba O. Alabi | Yanke Mao | Haonan Gao | En-Shiun Annie Lee
Proceedings of the 18th Conference of the European Chapter of the Association for Computational Linguistics (Volume 1: Long Papers)
Despite the progress in building multilingual language models, evaluation is often limited to a few languages with available datasets, which excludes a large number of low-resource languages. In this paper, we create SIB-200—a large-scale, open-sourced benchmark dataset for topic classification in 205 languages and dialects—to address the lack of evaluation datasets for Natural Language Understanding (NLU). For many of the languages covered in SIB-200, this is the first publicly available evaluation dataset for NLU. The dataset is based on the Flores-200 machine translation corpus. We annotated the English portion of the dataset and extended the sentence-level annotation to the remaining 204 languages covered in the corpus. Despite the simplicity of this task, our evaluation in the fully supervised setting, the cross-lingual transfer setting, and the large language model prompting setting shows that there is still a large gap between the performance of high-resource and low-resource languages when multilingual evaluation is scaled to numerous world languages. We found that languages unseen during the pre-training of multilingual language models, languages from under-represented families (like Nilotic and Atlantic-Congo), and languages from the regions of Africa, the Americas, Oceania, and South East Asia often have the lowest performance on our topic classification dataset. We hope our dataset will encourage a more inclusive evaluation of multilingual language models on a more diverse set of languages.
Co-authors
- En-Shiun Annie Lee 3
- Haonan Gao 2
- David Ifeoluwa Adelani 1
- Jesujoba Alabi 1
- Iqra Ali 1
- Elsie Chan 1
- Ethan Yue Heng Cheung 1
- Shou-Yi Hung 1
- Kimlan Huynh 1
- Runtong Liang 1
- Ka Ieng Charlotte Lo 1
- Yanke Mao 1
- Junghyun Min 1
- Pakawat Nakwijit 1
- York Hay Ng 1
- Shiyao Qian 1
- Surangika Ranathunga 1
- Ravi Shekhar 1
- Xiaoyu Shen 1
- Uthayasanker Thayasivam 1
- Murphy Tian 1
- Richard Tzong-Han Tsai 1
- Nikita Vassilyev 1
- You-Wei Wu 1
- Qiaoyiwen Wu 1
- Blair Yang 1
- Tsz Fung Yau 1
- Wing Yu Yip 1