HKCanto-Eval: A Benchmark for Evaluating Cantonese Language Understanding and Cultural Comprehension in LLMs
Tsz Chung Cheng, Chung Shing Cheng, Chaak-ming Lau, Eugene Lam, Wong Chun Yat, Hoi On Yu, Cheuk Hei Chong
Abstract
The ability of language models to comprehend and interact in diverse linguistic and cultural landscapes is crucial. The Cantonese language used in Hong Kong presents unique challenges for natural language processing due to its rich cultural nuances and lack of dedicated evaluation datasets. The HKCanto-Eval benchmark addresses this gap by evaluating the performance of large language models (LLMs) on Cantonese language understanding tasks, extending to English and Written Chinese for cross-lingual evaluation. HKCanto-Eval integrates cultural and linguistic nuances intrinsic to Hong Kong, providing a robust framework for assessing language models in realistic scenarios. Additionally, the benchmark includes questions designed to tap into the underlying linguistic metaknowledge of the models. Our findings indicate that while proprietary models generally outperform open-weight models, significant limitations remain in handling Cantonese-specific linguistic and cultural knowledge, highlighting the need for more targeted training data and evaluation methods. The code can be accessed at https://github.com/hon9kon9ize/hkeval2025.
- Anthology ID:
- 2025.conll-1.1
- Volume:
- Proceedings of the 29th Conference on Computational Natural Language Learning
- Month:
- July
- Year:
- 2025
- Address:
- Vienna, Austria
- Editors:
- Gemma Boleda, Michael Roth
- Venues:
- CoNLL | WS
- Publisher:
- Association for Computational Linguistics
- Pages:
- 1–11
- URL:
- https://preview.aclanthology.org/acl25-workshop-ingestion/2025.conll-1.1/
- Cite (ACL):
- Tsz Chung Cheng, Chung Shing Cheng, Chaak-ming Lau, Eugene Lam, Wong Chun Yat, Hoi On Yu, and Cheuk Hei Chong. 2025. HKCanto-Eval: A Benchmark for Evaluating Cantonese Language Understanding and Cultural Comprehension in LLMs. In Proceedings of the 29th Conference on Computational Natural Language Learning, pages 1–11, Vienna, Austria. Association for Computational Linguistics.
- Cite (Informal):
- HKCanto-Eval: A Benchmark for Evaluating Cantonese Language Understanding and Cultural Comprehension in LLMs (Cheng et al., CoNLL 2025)
- PDF:
- https://preview.aclanthology.org/acl25-workshop-ingestion/2025.conll-1.1.pdf