Md. Musfique Anwar
Also published as: Md Musfique Anwar
2025
Gen-mABSA-T5: A Multilingual Zero-Shot Generative Framework for Aspect-Based Sentiment Analysis
Shabrina Akter Shahana | Nuzhat Nairy Afrin | Md Musfique Anwar | Israt Jahan
Proceedings of the Second Workshop on Bangla Language Processing (BLP-2025)
Aspect-Based Sentiment Analysis (ABSA) identifies sentiments toward specific aspects of an entity. While progress has been substantial for high-resource languages such as English, low-resource languages like Bangla remain underexplored due to limited annotated data and linguistic challenges. We propose Gen-mABSA-T5, a multilingual zero-shot generative framework for ABSA based on Flan-T5, incorporating prompt engineering and Natural Language Inference (NLI). Without task-specific training, Gen-mABSA-T5 achieves state-of-the-art zero-shot accuracy of 61.56% on the large Bangla corpus, 73.50% on SemEval Laptop, and 73.56% on SemEval Restaurant, outperforming both English and Bangla task-specific models in zero-shot settings. It also performs competitively against much larger general-purpose models on both English and Bangla benchmarks. These results highlight the effectiveness of generative, zero-shot approaches for ABSA in low-resource and multilingual settings.
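The abstract does not include an implementation, but the prompt-plus-generation recipe it describes can be illustrated with a minimal sketch. The checkpoint (google/flan-t5-base), the prompt template, and the three-way label set below are assumptions for illustration, not the paper's exact design:

```python
# Minimal sketch of zero-shot, prompt-based ABSA with Flan-T5.
# Checkpoint, prompt wording, and label set are illustrative assumptions;
# the paper's exact prompts and NLI formulation are not reproduced here.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

MODEL = "google/flan-t5-base"
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForSeq2SeqLM.from_pretrained(MODEL)

def aspect_sentiment(sentence: str, aspect: str) -> str:
    # Cast aspect-level sentiment as constrained text generation:
    # the model answers with a single polarity word.
    prompt = (
        f'Sentence: "{sentence}"\n'
        f'What is the sentiment toward the aspect "{aspect}"? '
        "Answer with one word: positive, negative, or neutral."
    )
    inputs = tokenizer(prompt, return_tensors="pt")
    output_ids = model.generate(**inputs, max_new_tokens=5)
    return tokenizer.decode(output_ids[0], skip_special_tokens=True).strip().lower()

print(aspect_sentiment("The battery lasts all day but the screen is dim.", "screen"))
# expected output: "negative"
```

Because no parameters are updated, the same function applies to any language the backbone covers, which is what makes the zero-shot multilingual setting possible.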
A Comprehensive Text Optimization Approach to Bangla Summarization
Irtifa Haider | Shanjida Alam | Md. Tazel Hossan | Md. Musfique Anwar | Tanjim Taharat Aurpa
Proceedings of the Second Workshop on Bangla Language Processing (BLP-2025)
The task of Bengali text optimization demands not only the generation of concise and coherent summaries but also grammatical accuracy, semantic appropriateness, and factual reliability. This study presents a dual-phase optimization framework for Bengali text summarization that integrates entity-preserving preprocessing and abstractive generation with mT5, followed by refinement through sentence ranking, entity consistency enforcement, and optimization with instruction-tuned LLMs such as mBART. Evaluations using ROUGE, BLEU, BERTScore, and human ratings of fluency, adequacy, coherence, and readability show consistent gains over baseline summarizers. By embedding grammatical and factual safeguards into the summarization pipeline, this study establishes a robust and scalable benchmark for Bengali NLP, advancing text optimization research. Our model achieves 0.54 ROUGE-1 and 0.88 BERTScore on BANSData, outperforming recent multilingual baselines.
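As a companion illustration, the abstractive-generation phase of such a pipeline can be sketched with a public mT5 summarizer. The checkpoint name (csebuetnlp/mT5_multilingual_XLSum, an mT5 model whose training data includes Bangla) and the decoding settings are assumptions; the paper's entity-preserving preprocessing, sentence ranking, consistency enforcement, and LLM refinement stages are not reproduced here:

```python
# Minimal sketch of the abstractive-generation phase of a Bengali
# summarization pipeline. The checkpoint and decoding settings are
# illustrative assumptions, not the paper's actual setup.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

MODEL = "csebuetnlp/mT5_multilingual_XLSum"  # public mT5 summarizer covering Bangla
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForSeq2SeqLM.from_pretrained(MODEL)

def summarize(text: str) -> str:
    # Truncate long inputs to the encoder limit, then decode with beam search.
    inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=512)
    output_ids = model.generate(
        **inputs,
        max_new_tokens=84,       # short, headline-style summary
        num_beams=4,             # beam search for more fluent output
        no_repeat_ngram_size=3,  # discourage repeated phrases
    )
    return tokenizer.decode(output_ids[0], skip_special_tokens=True)
```

The downstream refinement steps the abstract describes (entity consistency and LLM-based optimization) would operate on the string this function returns.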