2025
Evaluating Credibility and Political Bias in LLMs for News Outlets in Bangladesh
Tabia Tanzin Prama | Md. Saiful Islam
Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 4: Student Research Workshop)
Large language models (LLMs) are widely used in search engines to provide direct answers, while AI chatbots retrieve updated information from the web. As these systems influence how billions access information, evaluating the credibility of news outlets has become crucial. We audit nine LLMs from OpenAI, Google, and Meta to assess their ability to evaluate the credibility and political bias of the top 20 most popular news outlets in Bangladesh. While most LLMs rate the tested outlets, larger models often refuse to rate sources due to insufficient information, while smaller models are more prone to hallucinations. We create a dataset of credibility ratings and political identities based on journalism experts' opinions and compare these with LLM responses. We find strong internal consistency in LLM credibility ratings, with an average correlation coefficient (ρ) of 0.72, but moderate alignment with expert evaluations, with an average ρ of 0.45. Most LLMs (GPT-4, GPT-4o-mini, Llama 3.3, Llama-3.1-70B, Llama 3.1 8B, and Gemini 1.5 Pro) in their default configurations favor the left-leaning Bangladesh Awami League, giving higher credibility ratings, and show misalignment with human experts. These findings highlight the significant role of LLMs in shaping news and political information.
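As a rough illustration of the alignment analysis described in this abstract, the sketch below computes Spearman's ρ between one model's credibility ratings and expert ratings over a set of outlets. The outlet names and scores are hypothetical placeholders, not data from the paper.

```python
# Minimal sketch: Spearman correlation between LLM and expert credibility ratings.
# Outlet names and scores below are hypothetical placeholders, not paper data.
from scipy.stats import spearmanr

expert_ratings = {"Outlet A": 4.2, "Outlet B": 3.1, "Outlet C": 2.5, "Outlet D": 4.8}
llm_ratings    = {"Outlet A": 3.9, "Outlet B": 3.4, "Outlet C": 2.0, "Outlet D": 4.5}

outlets = sorted(expert_ratings)               # fixed ordering so the ratings pair up
expert = [expert_ratings[o] for o in outlets]
llm    = [llm_ratings[o] for o in outlets]

rho, p_value = spearmanr(expert, llm)          # rank-based correlation coefficient
print(f"Spearman rho = {rho:.2f} (p = {p_value:.3f})")
```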
LLMs for Low-Resource Dialect Translation Using Context-Aware Prompting: A Case Study on Sylheti
Tabia Tanzin Prama
Proceedings of the Second Workshop on Bangla Language Processing (BLP-2025)
Large Language Models (LLMs) have demonstrated strong translation abilities through prompting, even without task-specific training. However, their effectiveness in dialectal and low-resource contexts remains underexplored. This study presents the first systematic investigation of LLM-based Machine Translation (MT) for Sylheti, a dialect of Bangla that is itself low-resource. We evaluate five advanced LLMs (GPT-4.1, GPT-4.1-mini, LLaMA 4, Grok 3, and Deepseek V3.2) across both translation directions (Bangla ↔ Sylheti), and find that these models struggle with dialect-specific vocabulary. To address this, we introduce Sylheti-CAP (Context-Aware Prompting), a three-step framework that embeds a linguistic rulebook, dictionary (core vocabulary and idioms), and authenticity check directly into prompts. Extensive experiments show that Sylheti-CAP consistently improves translation quality across models and prompting strategies. Both automatic metrics and human evaluations confirm its effectiveness, while qualitative analysis reveals notable reductions in hallucinations, ambiguities, and awkward phrasing—establishing Sylheti-CAP as a scalable solution for dialectal and low-resource MT.
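To make the three-step prompt structure concrete, here is an illustrative sketch of a context-aware prompt in the spirit of Sylheti-CAP. The rulebook entries, dictionary items, and authenticity-check wording are invented placeholders; the framework's actual contents are defined in the paper.

```python
# Illustrative sketch of a Sylheti-CAP-style prompt: rulebook + dictionary +
# authenticity check embedded directly in the prompt. All entries below are
# hypothetical placeholders, not the paper's actual resources.

def build_cap_prompt(source_text: str, direction: str = "Bangla -> Sylheti") -> str:
    rulebook = (
        "1. Preserve the dialect's verb endings rather than standard Bangla forms.\n"
        "2. Keep honorifics consistent with the source sentence."
    )
    dictionary = (
        "Core vocabulary and idioms (source -> target):\n"
        "  <standard word>  -> <dialect word>\n"
        "  <standard idiom> -> <dialect idiom>"
    )
    authenticity_check = (
        "Before answering, re-read your draft and replace any standard Bangla "
        "words that have a listed dialect equivalent."
    )
    return (
        f"Task: translate the text ({direction}).\n\n"
        f"Linguistic rulebook:\n{rulebook}\n\n"
        f"Dictionary:\n{dictionary}\n\n"
        f"Authenticity check:\n{authenticity_check}\n\n"
        f"Text:\n{source_text}\n\nTranslation:"
    )

prompt = build_cap_prompt("<Bangla sentence here>")
# The resulting prompt string can be sent to any chat-completion API.
```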
Computational Story Lab at BLP-2025 Task 1: HateSense: A Multi-Task Learning Framework for Comprehensive Hate Speech Identification using LLMs
Tabia Tanzin Prama | Christopher M. Danforth | Peter Dodds
Proceedings of the Second Workshop on Bangla Language Processing (BLP-2025)
This paper describes HateSense, our multi-task learning framework for the BLP 2025 shared task 1 on Bangla hate speech identification. The task requires not only detecting hate speech but also classifying its type, target, and severity. HateSense integrates binary and multi-label classifiers using both encoder- and decoder-based large language models (LLMs). We experimented with pre-trained encoder models (BERT-based models) and decoder models such as GPT-4.0, LLaMA 3.1 8B, and Gemma-2 9B. To address challenges such as class imbalance and the linguistic complexity of Bangla, we employed techniques like focal loss and odds ratio preference optimization (ORPO). Experimental results demonstrated that the pre-trained encoders (BanglaBERT) achieved state-of-the-art performance. Among different prompting strategies, chain-of-thought (CoT) combined with few-shot prompting proved most effective. Following the HateSense framework, our system attained competitive micro-F1 scores: 0.741 (Task 1A), 0.724 (Task 1B), and 0.7233 (Task 1C). These findings affirm the effectiveness of transformer-based architectures for Bangla hate speech detection and suggest promising avenues for multi-task learning in low-resource languages.
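The abstract mentions focal loss as a remedy for class imbalance. Below is a minimal, generic PyTorch implementation of multi-class focal loss as a sketch of that idea, not the authors' exact code; the γ value, class weights, and the toy batch are assumptions for illustration.

```python
# Generic multi-class focal loss (Lin et al., 2017), commonly used to handle
# class imbalance. This is a standard formulation, not the authors' exact code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class FocalLoss(nn.Module):
    def __init__(self, gamma: float = 2.0, alpha: torch.Tensor | None = None):
        super().__init__()
        self.gamma = gamma   # focusing parameter; larger values down-weight easy examples
        self.alpha = alpha   # optional per-class weights, shape (num_classes,)

    def forward(self, logits: torch.Tensor, targets: torch.Tensor) -> torch.Tensor:
        ce = F.cross_entropy(logits, targets, weight=self.alpha, reduction="none")
        pt = torch.exp(-ce)  # model's probability of the true class
        return ((1.0 - pt) ** self.gamma * ce).mean()

# Usage with a hypothetical 3-way classification head (e.g., a severity label):
loss_fn = FocalLoss(gamma=2.0)
logits = torch.randn(8, 3)           # batch of 8 examples, 3 classes
labels = torch.randint(0, 3, (8,))
print(loss_fn(logits, labels))
```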
BanglaMATH: A Bangla benchmark dataset for testing LLM mathematical reasoning at grades 6, 7, and 8
Tabia Tanzin Prama | Christopher M. Danforth | Peter Dodds
Proceedings of The 3rd Workshop on Mathematical Natural Language Processing (MathNLP 2025)
Large Language Models (LLMs) have tremendous potential to play a key role in supporting mathematical reasoning, with growing use in education and AI research. However, most existing benchmarks are limited to English, creating a significant gap for low-resource languages. For example, Bangla is spoken by nearly 250 million people who would collectively benefit from LLMs capable of native fluency. To address this, we present BanglaMATH, a dataset of 1.7k Bangla math word problems across topics such as Arithmetic, Algebra, Geometry, and Logical Reasoning, sourced from Bangla elementary school workbooks and annotated with details like grade level and number of reasoning steps. We have designed BanglaMATH to evaluate the mathematical capabilities of both commercial and open-source LLMs in Bangla, and we find that Gemini 2.5 Flash and DeepSeek V3 are the only models to achieve strong performance, with ≥ 80% accuracy across three elementary school grades. Furthermore, we assess the robustness and language bias of these top-performing LLMs by augmenting the original problems with distracting information, and translating the problems into English. We show that both LLMs fail to maintain robustness and exhibit significant performance bias in Bangla. Our study underlines current limitations of LLMs in handling arithmetic and mathematical reasoning in low-resource languages, and highlights the need for further research on multilingual and equitable mathematical understanding.
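For a sense of how accuracy on such a benchmark might be scored per grade, the sketch below compares a model's final numeric answer against gold answers. The record fields ("grade", "answer", "prediction") and the last-number matching rule are assumptions for illustration, not the paper's exact protocol; real Bangla responses would also need Bangla-numeral handling first.

```python
# Minimal sketch of per-grade accuracy scoring for a BanglaMATH-style benchmark.
# Field names and the numeric-matching rule are illustrative assumptions.
import re
from collections import defaultdict

def last_number(text: str) -> str | None:
    """Take the last number in a model response as its final answer
    (real Bangla responses would first need Bangla digits mapped to ASCII)."""
    nums = re.findall(r"-?\d+(?:\.\d+)?", text)
    return nums[-1] if nums else None

def accuracy_by_grade(examples):
    correct, total = defaultdict(int), defaultdict(int)
    for ex in examples:
        pred = last_number(ex["prediction"])
        total[ex["grade"]] += 1
        if pred is not None and float(pred) == float(ex["answer"]):
            correct[ex["grade"]] += 1
    return {g: correct[g] / total[g] for g in total}

# Hypothetical records:
examples = [
    {"grade": 6, "answer": "42", "prediction": "... so the result is 42."},
    {"grade": 7, "answer": "3.5", "prediction": "The answer is 7/2 = 3.5"},
]
print(accuracy_by_grade(examples))
```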