Abdullah Ibne Hanif Arean
2025
SOMAJGYAAN: A Dataset for Evaluating LLMs on Bangla Culture, Social Knowledge, and Low-Resource Language Adaptation
Fariha Anjum Shifa | Muhtasim Ibteda Shochcho | Abdullah Ibne Hanif Arean | Mohammad Ashfaq Ur Rahman | Akm Moshiur Rahman Mazumder | Ahaj Mahhin Faiak | Md Fahim | M Ashraful Amin | Amin Ahsan Ali | Akmmahbubur Rahman
Proceedings of the 14th International Joint Conference on Natural Language Processing and the 4th Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics
Despite significant progress in large language models (LLMs), their knowledge and evaluation remain centered on high-resource languages, leaving critical gaps in low-resource settings. This raises questions about how effectively LLMs handle subjects that require locally relevant knowledge. Addressing this challenge requires a robust dataset that reflects the knowledge of underrepresented regions such as Bangladesh. In this paper, we present SOMAJGYAAN, a Bangla multiple-choice dataset of 4,234 questions annotated across five levels of difficulty. The questions are drawn from Bangladesh's National Curriculum and Global Studies textbooks, covering a wide range of domains including History, Geography, Economics, Social Studies, Politics and Law, and Miscellaneous topics. Difficulty levels were assigned by four expert annotators to minimize annotation bias. Our experiments reveal that closed-source LLMs outperform open-source LLMs. While fine-tuning open-source models on the dataset improves their performance, they still fall short of matching closed-source LLMs. Our findings highlight the importance of culturally grounded evaluation datasets and task-specific adaptation for improving LLM performance in low-resource language settings.
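For readers unfamiliar with this kind of evaluation setup, the sketch below illustrates, in general terms and not as the paper's actual evaluation pipeline, how a multiple-choice question could be rendered as a zero-shot prompt and scored by exact match. The `ask_llm` stub, the four-option A–D format, and the record fields are assumptions made for illustration.

```python
# A minimal sketch of scoring an LLM on MCQ-style questions such as those in
# SOMAJGYAAN. Not the authors' released code; `ask_llm` is a placeholder for
# whatever model API is being evaluated.

def format_prompt(question: str, options: list[str]) -> str:
    """Render an MCQ as a zero-shot prompt with lettered options."""
    letters = ["A", "B", "C", "D"]
    lines = [question] + [f"{l}. {o}" for l, o in zip(letters, options)]
    lines.append("Answer with a single letter (A/B/C/D):")
    return "\n".join(lines)

def ask_llm(prompt: str) -> str:
    """Placeholder for a call to any chat/completions API; returns one letter."""
    return "A"  # stub response, for illustration only

def accuracy(examples: list[dict]) -> float:
    """Exact-match accuracy over records with 'question', 'options', 'answer'."""
    correct = 0
    for ex in examples:
        pred = ask_llm(format_prompt(ex["question"], ex["options"]))
        correct += int(pred.strip().upper()[:1] == ex["answer"])
    return correct / len(examples)
```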
NLP-DU at SemEval-2025 Task 11: Analyzing Multi-label Emotion Detection
Sadman Sakib | Ahaj Faiak | Abdullah Ibne Hanif Arean | Fariha Anjum Shifa
Proceedings of the 19th International Workshop on Semantic Evaluation (SemEval-2025)
This paper describes NLP-DU’s entry to SemEval-2025 Task 11 on multi-label emotion detection. We investigate the efficacy of transformer-based models and propose an ensemble approach that combines multiple models. Our experiments demonstrate that, under the dataset’s constraints, the ensemble outperforms the individual models on the key evaluation metrics. These findings underscore the potential of ensemble techniques for multi-label emotion detection and contribute to the broader understanding of emotion analysis in natural language processing.
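As a rough illustration only (the system's actual ensembling strategy may differ), the sketch below shows one common way to combine several multi-label classifiers: average the per-model sigmoid probabilities and threshold them. The 0.5 threshold, the two hypothetical models, and the three-label example are assumptions.

```python
# A minimal sketch of a probability-averaging ensemble for multi-label emotion
# detection. Checkpoints and threshold are illustrative, not the paper's setup.

import numpy as np

def ensemble_predict(per_model_logits: list[np.ndarray],
                     threshold: float = 0.5) -> np.ndarray:
    """Average sigmoid probabilities across models, then threshold.

    per_model_logits: one (n_examples, n_emotions) logit matrix per model.
    Returns a binary (n_examples, n_emotions) label matrix.
    """
    probs = [1.0 / (1.0 + np.exp(-logits)) for logits in per_model_logits]
    mean_probs = np.mean(probs, axis=0)
    return (mean_probs >= threshold).astype(int)

# Example with two hypothetical models and three emotion labels:
logits_a = np.array([[2.0, -1.0, 0.3]])
logits_b = np.array([[1.5, -0.5, -0.8]])
print(ensemble_predict([logits_a, logits_b]))  # -> [[1 0 0]]
```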