2025
BLUCK: A Benchmark Dataset for Bengali Linguistic Understanding and Cultural Knowledge
Daeen Kabir | Minhajur Rahman Chowdhury Mahim | Sheikh Shafayat | Adnan Sadik | Arian Ahmed | Eunsu Kim | Alice Oh
Proceedings of the Second Workshop on Bangla Language Processing (BLP-2025)
In this work, we introduce BLUCK, a new dataset designed to measure the performance of Large Language Models (LLMs) in Bengali linguistic understanding and cultural knowledge. Our dataset comprises 2366 multiple-choice questions (MCQs) carefully curated from compiled collections of several college- and job-level examinations, and spans 23 categories covering Bangladesh's culture and history as well as Bengali linguistics. We benchmarked BLUCK using 6 proprietary and 3 open-source LLMs, including GPT-4o, Claude-3.5-Sonnet, Gemini-1.5-Pro, Llama-3.3-70B-Instruct, and DeepSeekV3. Our results show that while these models perform reasonably well overall, they struggle in some areas of Bengali phonetics. Although current LLMs' performance on Bengali cultural and linguistic contexts is still not comparable to that of mainstream languages like English, our results indicate Bengali's status as a mid-resource language. Importantly, BLUCK is also the first MCQ-based evaluation benchmark centered around native Bengali culture, history, and linguistics.
The BiGGen Bench: A Principled Benchmark for Fine-grained Evaluation of Language Models with Language Models
Seungone Kim | Juyoung Suk | Ji Yong Cho | Shayne Longpre | Chaeeun Kim | Dongkeun Yoon | Guijin Son | Yejin Cho | Sheikh Shafayat | Jinheon Baek | Sue Hyun Park | Hyeonbin Hwang | Jinkyung Jo | Hyowon Cho | Haebin Shin | Seongyun Lee | Hanseok Oh | Noah Lee | Namgyu Ho | Se June Joo | Miyoung Ko | Yoonjoo Lee | Hyungjoo Chae | Jamin Shin | Joel Jang | Seonghyeon Ye | Bill Yuchen Lin | Sean Welleck | Graham Neubig | Moontae Lee | Kyungjae Lee | Minjoon Seo
Proceedings of the 2025 Conference of the Nations of the Americas Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)
As language models (LMs) become capable of handling a wide range of tasks, their evaluation is becoming as challenging as their development. Most generation benchmarks currently assess LMs using abstract evaluation criteria, like helpfulness and harmlessness, which often lack the flexibility and granularity of human assessment. Additionally, these benchmarks tend to focus disproportionately on specific capabilities such as instruction following, leading to coverage bias. To overcome these limitations, we introduce the BiGGen Bench, a principled generation benchmark designed to thoroughly evaluate nine distinct capabilities of LMs across 77 diverse tasks. A key feature of the BiGGen Bench is its use of instance-specific evaluation criteria, closely mirroring the nuanced discernment of human evaluation. We apply this benchmark to assess 100 frontier LMs using five evaluator LMs. Our code, data, and evaluation results are all publicly available at https://github.com/prometheus-eval/prometheus-eval.
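To make the idea of instance-specific evaluation criteria concrete, the sketch below shows what a single benchmark instance with its own rubric might look like, in contrast to an abstract criterion such as "helpfulness". The field names, question, and rubric wording are illustrative assumptions for exposition, not the benchmark's actual schema; the linked repository contains the real data format.

```python
# Hypothetical instance with its own scoring rubric (assumed schema, not the
# benchmark's real one): the rubric is tied to this specific question rather
# than to a generic criterion like "helpfulness".
example_instance = {
    "capability": "reasoning",
    "input": "A train covers 60 km in 45 minutes. What is its average speed in km/h?",
    "reference_answer": "80 km/h",
    "score_rubric": {
        "criteria": "Does the response convert 45 minutes to 0.75 hours and compute 60 / 0.75 = 80 km/h?",
        "score_5": "Correct unit conversion, correct arithmetic, and a clearly stated answer of 80 km/h.",
        "score_1": "No unit conversion and an incorrect final speed.",
    },
}
```

An evaluator LM is then given the response together with this per-instance rubric, so the judgment it produces is grounded in the specific requirements of the question rather than a one-size-fits-all criterion.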
2024
LangBridge: Multilingual Reasoning Without Multilingual Supervision
Dongkeun Yoon | Joel Jang | Sungdong Kim | Seungone Kim | Sheikh Shafayat | Minjoon Seo
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
We introduce LangBridge, a zero-shot approach to adapt language models for multilingual reasoning tasks without multilingual supervision. LangBridge operates by bridging two models, each specialized in different aspects: (1) one specialized in understanding multiple languages (e.g., mT5 encoder) and (2) one specialized in reasoning (e.g., MetaMath). LangBridge connects the two models by introducing minimal trainable parameters between them. Despite utilizing only English data for training, LangBridge considerably enhances the performance of language models on low-resource languages across mathematical reasoning, code completion, logical reasoning, and commonsense reasoning. Our analysis suggests that the efficacy of LangBridge stems from the language-agnostic characteristics of multilingual representations. We publicly release our code and models.
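The bridging idea described above can be sketched in a few lines of PyTorch: a frozen multilingual encoder, a frozen English-centric reasoning model, and a small trainable projection between them. The model names, layer choice, and training details below are illustrative assumptions, not the authors' released implementation (see their public code for that).

```python
import torch.nn as nn
from transformers import AutoModel, AutoModelForCausalLM


class EncoderBridgeLM(nn.Module):
    """Frozen multilingual encoder -> trainable linear bridge -> frozen reasoning LM (illustrative sketch)."""

    def __init__(self, enc_name="google/mt5-xl", lm_name="meta-math/MetaMath-7B-V1.0"):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(enc_name).encoder      # multilingual understanding
        self.reasoner = AutoModelForCausalLM.from_pretrained(lm_name)   # English-centric reasoning
        # The only trainable parameters: a projection from the encoder's
        # hidden size into the reasoner's embedding space.
        self.bridge = nn.Linear(self.encoder.config.d_model,
                                self.reasoner.config.hidden_size)
        for p in self.encoder.parameters():
            p.requires_grad = False
        for p in self.reasoner.parameters():
            p.requires_grad = False

    def forward(self, input_ids, attention_mask):
        # Encode the (possibly non-English) input with the multilingual encoder.
        hidden = self.encoder(input_ids=input_ids,
                              attention_mask=attention_mask).last_hidden_state
        # Project the encoder states into the reasoner's embedding space and
        # feed them as input embeddings; training would use English-only data
        # with the standard causal LM objective.
        soft_prompt = self.bridge(hidden)
        return self.reasoner(inputs_embeds=soft_prompt, attention_mask=attention_mask)
```

The point of the design is that only the bridge is updated, so the reasoner never sees non-English training data, yet the multilingual encoder's language-agnostic representations let the combined model handle low-resource languages at inference time.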
BEnQA: A Question Answering Benchmark for Bengali and English
Sheikh Shafayat | H M Quamran Hasan | Minhajur Rahman Chowdhury Mahim | Rifki Afina Putri | James Thorne | Alice Oh
Findings of the Association for Computational Linguistics: ACL 2024
In this study, we introduce BEnQA, a dataset comprising parallel Bengali and English exam questions for middle and high school levels in Bangladesh. Our dataset consists of approximately 5K questions covering several subjects in science with different types of questions, including factual, application, and reasoning-based questions. We benchmark several Large Language Models (LLMs) with our parallel dataset and observe a notable performance disparity between the models in Bengali and English. We also investigate prompting methods and find that Chain-of-Thought prompting is beneficial mostly on reasoning questions, but not so much on factual ones. Additionally, appending an English translation helps models answer questions asked in Bengali. Our findings point to promising future research directions for improving the performance of LLMs in Bengali and, more generally, in low-resource languages.
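The "append an English translation" finding mentioned above can be illustrated with a minimal prompt-construction sketch. The prompt wording and function name are assumptions for illustration, not the paper's exact templates.

```python
# Minimal sketch (assumed template, not the paper's) of prompting a model
# with a Bengali question plus its parallel English translation.
def build_prompt(bengali_question: str, english_translation: str | None = None) -> str:
    prompt = f"প্রশ্ন: {bengali_question}\n"
    if english_translation:
        # The abstract reports that adding a parallel English translation
        # improves LLM answers to questions asked in Bengali.
        prompt += f"(English translation: {english_translation})\n"
    prompt += "উত্তর:"
    return prompt


# Example usage with a parallel question pair from the dataset:
# build_prompt("পানির রাসায়নিক সংকেত কী?", "What is the chemical formula of water?")
```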