Chuen Shin Yong
2025
OZemi at SemEval-2025 Task 11: Multilingual Emotion Detection and Intensity
Hidetsune Takahashi | Sumiko Teng | Jina Lee | Wenxiao Hu | Rio Obe | Chuen Shin Yong | Emily Ohman
Proceedings of the 19th International Workshop on Semantic Evaluation (SemEval-2025)
This paper presents the OZemi team’s submission to SemEval-2025 Task 11: Multilingual Emotion Detection and Intensity. Our approach prioritized computational efficiency, leveraging lightweight models that achieved competitive results even for low-resource languages. We addressed data imbalance through data augmentation techniques such as back translation and class balancing. Our system used multilingual BERT and machine translation to enhance performance across 35 languages. Although our system ranked mid-tier overall, the results demonstrate that relatively simple models can yield adequate performance across diverse linguistic settings. We provide an error analysis of emotion classification challenges, particularly for nuanced expressions such as sarcasm and irony, and discuss the impact of emoji representation on model predictions. Finally, we outline future directions, including improvements in sentiment intensity modeling and the integration of semantic prosody to refine emotion detection.
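The abstract names back translation as one of the augmentation techniques but does not specify the translation models or pivot language. The sketch below illustrates the general idea with Hugging Face Marian MT checkpoints and a French pivot; the checkpoint names, the pivot choice, and the helper back_translate are assumptions for illustration, not the OZemi team's actual pipeline.

```python
# Minimal sketch of back-translation augmentation, assuming Marian MT
# checkpoints and a French pivot (both hypothetical choices here).
from transformers import MarianMTModel, MarianTokenizer

def back_translate(texts,
                   fwd_name="Helsinki-NLP/opus-mt-en-fr",
                   bwd_name="Helsinki-NLP/opus-mt-fr-en"):
    """Translate English texts to a pivot language and back to get paraphrases."""
    fwd_tok = MarianTokenizer.from_pretrained(fwd_name)
    fwd = MarianMTModel.from_pretrained(fwd_name)
    bwd_tok = MarianTokenizer.from_pretrained(bwd_name)
    bwd = MarianMTModel.from_pretrained(bwd_name)

    # English -> pivot language
    enc = fwd_tok(texts, return_tensors="pt", padding=True, truncation=True)
    pivot = fwd_tok.batch_decode(fwd.generate(**enc), skip_special_tokens=True)

    # Pivot language -> English, yielding paraphrased training examples
    enc = bwd_tok(pivot, return_tensors="pt", padding=True, truncation=True)
    return bwd_tok.batch_decode(bwd.generate(**enc), skip_special_tokens=True)

# Paraphrases of minority-class examples can be added back to the training
# set to counter label imbalance.
augmented = back_translate(["I can't believe this actually worked!"])
print(augmented)
```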
ChuenSumi at SemEval-2025 Task 1: Sentence Transformer Models and Processing Idiomacity
Sumiko Teng | Chuen Shin Yong
Proceedings of the 19th International Workshop on Semantic Evaluation (SemEval-2025)
This paper describes our participation in Task 1 of SemEval-2025, specifically Subtask A’s English Text-Only track, where we develop a model to rank text descriptions of images by how well they represent the use of a given multi-word expression in its respective context sentence. We trained sentence transformer models from Hugging Face to rank the text descriptions, finding the RoBERTa model to be the better-performing one. For the final evaluation, the fine-tuned RoBERTa model achieved an accuracy of 0.4 on the first development evaluation set and 0.2 on the second, ranking 9th in the English Text-Only category for Subtask A. Overall, our results show that a vanilla sentence-transformer approach performs adequately on the task and in processing idioms. They also suggest that RoBERTa models may be stronger at idiom processing than other models.
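The abstract describes ranking candidate image descriptions against the context sentence of a multi-word expression. A minimal sketch of that setup is shown below, assuming the sentence-transformers library and a RoBERTa-based checkpoint; the specific checkpoint, the example sentence, and the candidate descriptions are illustrative assumptions, not the paper's fine-tuned model or data.

```python
# Minimal sketch: rank candidate image descriptions by cosine similarity to
# the context sentence containing the idiom. Checkpoint name is assumed.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("sentence-transformers/all-roberta-large-v1")

context = "After the merger fell through, he was left holding the bag."
candidates = [
    "A man literally holding a paper bag.",
    "A person left to take the blame for a failed deal.",
    "An empty conference room after a meeting.",
]

ctx_emb = model.encode(context, convert_to_tensor=True)
cand_embs = model.encode(candidates, convert_to_tensor=True)

# Higher cosine similarity = description better matches the idiom's use
# in context; sort descending to produce the ranking.
scores = util.cos_sim(ctx_emb, cand_embs)[0]
for text, score in sorted(zip(candidates, scores.tolist()), key=lambda x: -x[1]):
    print(f"{score:.3f}  {text}")
```

In practice such a model would be fine-tuned on the task's annotated rankings rather than used off the shelf; the sketch only shows the inference-time scoring step.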