
CIOL at SemEval-2025 Task 11: Multilingual Pre-trained Model Fusion for Text-based Emotion Recognition
Md. Hoque | Mahfuz Ahmed Anik | Abdur Rahman | Azmine Toushik Wasi
Proceedings of the 19th International Workshop on Semantic Evaluation (SemEval-2025)

Multilingual emotion detection is a critical challenge in natural language processing, enabling applications in sentiment analysis, mental health monitoring, and user engagement. However, existing models struggle with overlapping emotions, intensity quantification, and cross-lingual adaptation, particularly in low-resource languages. This study addresses these challenges as part of SemEval-2025 Task 11 by leveraging language-specific transformer models for multi-label classification (Track A), intensity prediction (Track B), and cross-lingual generalization (Track C). Our models achieved strong performance on Russian (Track A: 0.848 F1; Track B: 0.8594 F1), which we attribute to emotion-rich pretraining, whereas performance on Chinese (0.483 F1) and Spanish (0.6848 F1) lagged on intensity estimation. Track C exposed significant cross-lingual adaptation issues, with scores for Russian (0.3102 F1), Chinese (0.2992 F1), and Indian (0.2613 F1) highlighting the difficulty of low-resource settings. Despite these limitations, our findings provide valuable insights into multilingual emotion detection. Future work should enhance cross-lingual representations, address data scarcity, and integrate multimodal information for improved generalization and real-world applicability.
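
The abstract describes the Track A setup as multi-label emotion classification with pretrained transformer models. As a rough illustration of that general setup (not the authors' actual system), the sketch below fine-tunes-style loads a multilingual checkpoint with a sigmoid multi-label head via Hugging Face Transformers; the checkpoint name and emotion label set are assumptions, since the paper entry does not specify them.

```python
# Minimal sketch of multi-label emotion classification with a pretrained
# transformer. Checkpoint and label set are illustrative assumptions only.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

EMOTIONS = ["anger", "disgust", "fear", "joy", "sadness", "surprise"]  # assumed label set

model_name = "xlm-roberta-base"  # placeholder multilingual checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(
    model_name,
    num_labels=len(EMOTIONS),
    problem_type="multi_label_classification",  # per-label sigmoid + BCE loss
)

text = "I can't believe we won, this is amazing!"
inputs = tokenizer(text, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits

# Independent per-label probabilities; threshold at 0.5 for multi-label output.
probs = torch.sigmoid(logits).squeeze(0)
predicted = [label for label, p in zip(EMOTIONS, probs) if p > 0.5]
print(predicted)
```

In this framing, each emotion is scored independently, so a sentence can receive several labels at once, which matches the overlapping-emotions challenge the abstract highlights; Track B intensity prediction would instead regress or classify an intensity score per emotion.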