Tengxiao Lv
2025
DUT_IR at SemEval-2025 Task 11: Enhancing Multi-Label Emotion Classification with an Ensemble of Pre-trained Language Models and Large Language Models
Chao Liu | Junliang Liu | Tengxiao Lv | Huayang Li | Tao Zeng | Ling Luo | Yuanyuan Sun | Hongfei Lin
Proceedings of the 19th International Workshop on Semantic Evaluation (SemEval-2025)
In this work, we tackle the challenge of multi-label emotion classification, where a sentence can simultaneously express multiple emotions. This task is particularly difficult due to the overlapping nature of emotions and the limited context available in short texts. To address these challenges, we propose an ensemble approach that integrates Pre-trained Language Models (BERT-based models) and Large Language Models, each capturing distinct emotional cues within the text. The predictions from these models are aggregated through a voting mechanism, improving classification accuracy. Additionally, we incorporate threshold optimization and class weighting techniques to mitigate class imbalance. Our method demonstrates substantial improvements over baseline models, ranking 4th out of 90 teams on the English leaderboard of SemEval-2025 Task 11 Track A.
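The abstract does not spell out the voting or threshold-search procedure. As a minimal illustrative sketch of one plausible reading, assuming each ensemble member outputs per-label sigmoid probabilities, hard voting over thresholded predictions and per-label threshold tuning on a dev set might look like the following (helper names `vote` and `tune_thresholds` are hypothetical, not from the paper):

```python
# Sketch only: the exact voting rule, threshold grid, and class-weighting
# scheme used by the authors are not given in the abstract.
import numpy as np
from sklearn.metrics import f1_score

def vote(prob_list, thresholds):
    """Hard-vote the binarized predictions of several models.

    prob_list:  list of (n_samples, n_labels) sigmoid probability arrays,
                one array per model in the ensemble.
    thresholds: (n_labels,) per-label decision thresholds.
    """
    votes = np.stack([(p >= thresholds).astype(int) for p in prob_list])
    # Predict a label when at least half of the models predict it.
    return (votes.mean(axis=0) >= 0.5).astype(int)

def tune_thresholds(probs, y_true, grid=np.arange(0.1, 0.9, 0.05)):
    """Pick, per label, the threshold that maximizes dev-set F1."""
    n_labels = y_true.shape[1]
    best = np.full(n_labels, 0.5)
    for j in range(n_labels):
        scores = [
            f1_score(y_true[:, j], (probs[:, j] >= t).astype(int), zero_division=0)
            for t in grid
        ]
        best[j] = grid[int(np.argmax(scores))]
    return best
```

Per-label thresholds (rather than a single global 0.5 cutoff) are a common way to counter class imbalance in multi-label setups, since rare emotions often need a lower decision threshold.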
DUTIR at SemEval-2025 Task 10: A Large Language Model-based Approach for Entity Framing in Online News
Tengxiao Lv | Juntao Li | Chao Liu | Yiyang Kang | Ling Luo | Yuanyuan Sun | Hongfei Lin
Proceedings of the 19th International Workshop on Semantic Evaluation (SemEval-2025)
We propose a multilingual text processing framework that combines multilingual translation with data augmentation, QLoRA-based multi-model fine-tuning, and GLM-4-Plus-based ensemble classification. Translating the multilingual texts into English with GLM-4-Plus increases data diversity and quantity, and data augmentation improves performance on the imbalanced dataset. QLoRA fine-tuning optimizes the models and reduces classification loss, while GLM-4-Plus, acting as a meta-classifier, further improves system performance. Our system achieved first place in three languages (English, Portuguese and Russian).
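The abstract names QLoRA fine-tuning but not its configuration. A minimal sketch using the Hugging Face transformers/peft stack follows; the base model, label count, LoRA rank, and target modules are all assumptions, not the paper's settings:

```python
# Sketch only: 4-bit quantized base weights with trainable LoRA adapters
# (the standard QLoRA recipe); every hyperparameter below is an assumption.
import torch
from transformers import AutoModelForSequenceClassification, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                       # 4-bit base weights (the "Q" in QLoRA)
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForSequenceClassification.from_pretrained(
    "meta-llama/Llama-2-7b-hf",              # placeholder base model, not the paper's
    num_labels=3,                            # placeholder: entity-framing role classes
    quantization_config=bnb_config,
)
model = prepare_model_for_kbit_training(model)
lora = LoraConfig(
    r=16, lora_alpha=32, lora_dropout=0.05,  # assumed rank and scaling
    target_modules=["q_proj", "v_proj"],
    task_type="SEQ_CLS",
)
model = get_peft_model(model, lora)          # only the small LoRA adapters are trained
```

Because only the low-rank adapters receive gradients while the 4-bit base weights stay frozen, several such models can be fine-tuned cheaply and then combined, which fits the multi-model ensemble the abstract describes.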