2025
Batayan: A Filipino NLP benchmark for evaluating Large Language Models
Jann Railey Montalan | Jimson Paulo Layacan | David Demitri Africa | Richell Isaiah S. Flores | Michael T. Lopez II | Theresa Denise Magsajo | Anjanette Cayabyab | William Chandra Tjhi
Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
Recent advances in large language models (LLMs) have demonstrated remarkable capabilities on widely benchmarked high-resource languages. However, the linguistic nuances of under-resourced languages remain unexplored. We introduce Batayan, a holistic Filipino benchmark that systematically evaluates LLMs across three key natural language processing (NLP) competencies: understanding, reasoning, and generation. Batayan consolidates eight tasks, three of which did not previously exist for Filipino corpora, covering both Tagalog and code-switched Taglish utterances. Our rigorous, native-speaker-driven adaptation and validation processes ensure fluency and faithfulness to the complex morphological and syntactic structures of Filipino, alleviating the pervasive translationese bias in existing Filipino corpora. We report empirical results on a variety of open-source and commercial LLMs, highlighting significant performance gaps that signal the under-representation of Filipino in pre-training corpora, the unique hurdles in modeling Filipino’s rich morphology and constructions, and the importance of explicit Filipino language support. Moreover, we discuss the practical challenges encountered in dataset construction and propose principled solutions for building culturally and linguistically faithful resources in under-represented languages. We also provide a public evaluation suite as a clear foundation for iterative, community-driven progress in Filipino NLP.
SEA-HELM: Southeast Asian Holistic Evaluation of Language Models
Yosephine Susanto | Adithya Venkatadri Hulagadri | Jann Railey Montalan | Jian Gang Ngui | Xianbin Yong | Wei Qi Leong | Hamsawardhini Rengarajan | Peerat Limkonchotiwat | Yifan Mai | William Chandra Tjhi
Findings of the Association for Computational Linguistics: ACL 2025
With the rapid emergence of novel capabilities in Large Language Models (LLMs), the need for rigorous, integrated multilingual and multicultural benchmarks has become more pronounced. Though existing LLM benchmarks are capable of evaluating specific capabilities of LLMs in English as well as in various mid- to low-resource languages, including those in the Southeast Asian (SEA) region, a comprehensive and culturally representative evaluation suite for the SEA languages has not been developed thus far. Here, we present SEA-HELM, a holistic linguistic and cultural LLM evaluation suite that emphasises SEA languages, comprising five core pillars: (1) NLP CLASSICS, (2) LLM-SPECIFICS, (3) SEA LINGUISTICS, (4) SEA CULTURE, (5) SAFETY. SEA-HELM currently supports Filipino, Indonesian, Tamil, Thai, and Vietnamese. We also introduce the SEA-HELM leaderboard, which allows users to understand models’ multilingual and multicultural performance in a systematic and user-friendly manner. We make the SEA-HELM evaluation code publicly available.
The Thai Universal Dependency Treebank
Panyut Sriwirote | Wei Qi Leong | Charin Polpanumas | Santhawat Thanyawong | William Chandra Tjhi | Wirote Aroonmanakun | Attapol T. Rutherford
Transactions of the Association for Computational Linguistics, Volume 13
Automatic dependency parsing of Thai sentences has been underexplored, as evidenced by the lack of large Thai dependency treebanks with complete dependency structures and the lack of a published evaluation of state-of-the-art models, especially transformer-based parsers. In this work, we addressed these gaps by introducing the Thai Universal Dependency Treebank (TUD), a new Thai treebank consisting of 3,627 trees annotated according to the Universal Dependencies (UD) framework. We then benchmarked 92 dependency parsing models that incorporate pretrained transformers on Thai-PUD and our TUD, achieving state-of-the-art results and shedding light on the optimal model components for Thai dependency parsing. Our error analysis of the models also reveals that polyfunctional words, serial verb constructions, and the lack of rich morphosyntactic features present the main challenges for Thai dependency parsing.
2024
Kalahi: A handcrafted, grassroots cultural LLM evaluation suite for Filipino
Jann Railey Montalan | Jian Gang Ngui | Wei Qi Leong | Yosephine Susanto | Hamsawardhini Rengarajan | Alham Fikri Aji | William Chandra Tjhi
Proceedings of the 38th Pacific Asia Conference on Language, Information and Computation
Aalamaram: A Large-Scale Linguistically Annotated Treebank for the Tamil Language
A M Abirami | Wei Qi Leong | Hamsawardhini Rengarajan | D Anitha | R Suganya | Himanshu Singh | Kengatharaiyer Sarveswaran | William Chandra Tjhi | Rajiv Ratn Shah
Proceedings of the 7th Workshop on Indian Language Data: Resources and Evaluation
Tamil is a relatively low-resource language in the field of Natural Language Processing (NLP). Recent years have seen a growth in Tamil NLP datasets for Natural Language Understanding (NLU) and Natural Language Generation (NLG) tasks, but high-quality linguistic resources remain scarce. In order to alleviate this gap in resources, this paper introduces Aalamaram, a treebank with rich linguistic annotations for the Tamil language. It is hitherto the largest publicly available Tamil treebank, with almost 10,000 sentences from diverse sources, and is annotated for the tasks of Part-of-speech (POS) tagging, Named Entity Recognition (NER), Morphological Parsing and Dependency Parsing. Close attention has also been paid to multi-word segmentation, especially in the context of Tamil clitics. Although the treebank is based largely on the Universal Dependencies (UD) specifications, significant effort has been made to adjust the annotation rules according to the idiosyncrasies and complexities of the Tamil language, thereby providing a valuable resource for linguistic research and NLP development.