SEA-LION: Southeast Asian Languages in One Network

Raymond Ng | Thanh Ngan Nguyen | Huang Yuli | Tai Ngee Chia | Leong Wai Yi | Wei Qi Leong | Xianbin Yong | Jian Gang Ngui | Yosephine Susanto | Nicholas Cheng | Hamsawardhini Rengarajan | Peerat Limkonchotiwat | Adithya Venkatadri Hulagadri | Kok Wai Teng | Yeo Yeow Tong | Bryan Siow | Wei Yi Teo | Tan Choon Meng | Brandon Ong | Zhi Hao Ong | Jann Railey Montalan | Adwin Chan | Sajeban Antonyrex | Ren Lee | Esther Choa | David Ong Tat-Wee | Bing Jie Darius Liu | William Chandra Tjhi | Erik Cambria | Leslie Teo

Proceedings of the 14th International Joint Conference on Natural Language Processing and the 4th Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics, 2025
Recently, Large Language Models (LLMs) have dominated much of the artificial intelligence scene with their ability to process and generate natural language. However, the majority of LLM research and development remains English-centric, leaving low-resource languages such as those in the Southeast Asian (SEA) region under-represented. To address this representation gap, we introduce Llama-SEA-LION-v3-8B-IT and Gemma-SEA-LION-v3-9B-IT, two cutting-edge multilingual LLMs designed for SEA languages. The SEA-LION family of LLMs supports 11 SEA languages, namely English, Chinese, Indonesian, Vietnamese, Malay, Thai, Burmese, Lao, Filipino, Tamil, and Khmer. Our work leverages large-scale multilingual continued pre-training with a comprehensive post-training regime involving multiple stages of instruction fine-tuning, alignment, and model merging. Evaluation results on multilingual benchmarks indicate that our models achieve state-of-the-art performance among LLMs supporting SEA languages. We open-source the models to benefit the wider SEA community.
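Since the models are open-sourced, a minimal usage sketch follows, assuming the checkpoints are published on the Hugging Face Hub and loadable with the standard transformers chat API; the repository ID and the example prompt are illustrative assumptions, not details confirmed by this page.

```python
# Minimal sketch: loading an instruction-tuned SEA-LION checkpoint and
# generating a response. The repo ID below is an assumption for illustration.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "aisingapore/Llama-SEA-LION-v3-8B-IT"  # assumed Hub repo ID

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# Chat-style prompt in one of the 11 supported languages (Indonesian here).
messages = [{"role": "user", "content": "Apa ibu kota Indonesia?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=128)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```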