Rifki Putri
2024
BEnQA: A Question Answering Benchmark for Bengali and English
Sheikh Shafayat | H Hasan | Minhajur Mahim | Rifki Putri | James Thorne | Alice Oh
Findings of the Association for Computational Linguistics: ACL 2024
In this study, we introduce BEnQA, a dataset of parallel Bengali and English exam questions at the middle- and high-school levels in Bangladesh. Our dataset consists of approximately 5K questions covering several science subjects and different question types, including factual, application, and reasoning-based questions. We benchmark several large language models (LLMs) on our parallel dataset and observe a notable performance disparity between Bengali and English. We also investigate several prompting methods and find that Chain-of-Thought prompting benefits mostly reasoning questions, but far less factual ones. We also find that appending an English translation helps models answer questions asked in Bengali. Our findings point to promising future research directions for improving LLM performance in Bengali and, more generally, in low-resource languages.
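The two prompting strategies reported above are straightforward to reproduce. Below is a minimal sketch of how the Chain-of-Thought and translation-appended prompts might be constructed; the exact templates, function names, and example question here are assumptions, not the paper's own code.

```python
# Illustrative sketch of the prompting setups described in the abstract.
# Prompt wording, function names, and the example question are assumptions;
# the BEnQA paper's actual templates may differ.

def direct_prompt(question_bn: str, options: list[str]) -> str:
    """Plain multiple-choice prompt using only the Bengali question."""
    opts = "\n".join(f"{chr(65 + i)}. {o}" for i, o in enumerate(options))
    return f"Answer the following question.\n\n{question_bn}\n{opts}\nAnswer:"

def cot_prompt(question_bn: str, options: list[str]) -> str:
    """Chain-of-Thought: ask the model to reason step by step first.
    The abstract reports this helps mainly on reasoning questions."""
    return direct_prompt(question_bn, options).replace(
        "Answer:", "Let's think step by step, then give the final answer."
    )

def translation_appended_prompt(question_bn: str, question_en: str,
                                options: list[str]) -> str:
    """Append the parallel English translation after the Bengali question.
    The abstract reports this improves accuracy on Bengali questions."""
    opts = "\n".join(f"{chr(65 + i)}. {o}" for i, o in enumerate(options))
    return (f"Answer the following question.\n\n{question_bn}\n"
            f"(English translation: {question_en})\n{opts}\nAnswer:")

if __name__ == "__main__":
    q_bn = "পানির রাসায়নিক সংকেত কী?"  # "What is the chemical formula of water?"
    q_en = "What is the chemical formula of water?"
    options = ["H2O", "CO2", "O2", "NaCl"]
    print(translation_appended_prompt(q_bn, q_en, options))
```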
Cendol: Open Instruction-tuned Generative Large Language Models for Indonesian Languages
Samuel Cahyawijaya | Holy Lovenia | Fajri Koto | Rifki Putri | Wawan Cenggoro | Jhonson Lee | Salsabil Akbar | Emmanuel Dave | Nuurshadieq Nuurshadieq | Muhammad Mahendra | Rr Putri | Bryan Wilie | Genta Winata | Alham Aji | Ayu Purwarianti | Pascale Fung
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
Large language models (LLMs) show remarkable human-like capability in various domains and languages, but their quality degrades markedly in low-resource languages such as Indonesian. To bridge this quality gap, we introduce Cendol, a collection of Indonesian LLMs encompassing both decoder-only and encoder-decoder architectures across a range of model sizes. We highlight Cendol’s effectiveness across a diverse array of tasks, attaining a ~20% improvement, and demonstrate its capability to generalize to unseen tasks and indigenous languages of Indonesia. Cendol models also show improved human favorability, despite limitations in capturing indigenous knowledge and cultural values of Indonesia. In addition, we discuss the shortcomings of parameter-efficient tuning methods, such as LoRA, for language adaptation; instead, we propose vocabulary adaptation to improve efficiency. Lastly, we evaluate the safety of Cendol and show that safety acquired during pre-training in one language, such as English, transfers to low-resource languages, such as Indonesian, even without RLHF or safety fine-tuning.
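The vocabulary-adaptation idea mentioned in the abstract can be sketched with the Hugging Face transformers API. This is a minimal illustration of one common recipe, not the exact procedure used to build Cendol; the base model name and corpus path are placeholders.

```python
# Minimal sketch of vocabulary adaptation (one common recipe, not necessarily
# Cendol's exact procedure): retrain the tokenizer on target-language text so
# Indonesian sentences need fewer tokens, then resize the embedding matrix.

from transformers import AutoModelForCausalLM, AutoTokenizer

BASE_MODEL = "gpt2"                    # placeholder base model
CORPUS_PATH = "indonesian_corpus.txt"  # placeholder target-language corpus

def corpus_iterator(path: str, batch_size: int = 1000):
    """Yield batches of text lines for tokenizer training."""
    with open(path, encoding="utf-8") as f:
        batch = []
        for line in f:
            batch.append(line.strip())
            if len(batch) == batch_size:
                yield batch
                batch = []
        if batch:
            yield batch

tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)

# Learn a new vocabulary on the Indonesian corpus, reusing the original
# tokenization algorithm (BPE for GPT-2).
new_tokenizer = tokenizer.train_new_from_iterator(
    corpus_iterator(CORPUS_PATH), vocab_size=32_000
)

model = AutoModelForCausalLM.from_pretrained(BASE_MODEL)
# Resize input/output embeddings to match the new vocabulary. New rows are
# randomly initialized; practical recipes copy embeddings for tokens shared
# with the old vocabulary and then continue pre-training on the target language.
model.resize_token_embeddings(len(new_tokenizer))
```

Because a target-language vocabulary encodes Indonesian text in fewer tokens, this directly reduces training and inference cost, which is the efficiency gain the abstract attributes to vocabulary adaptation.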