Kun Lu
2025
NCL-NLP at SemEval-2025 Task 11: Using Prompting engineering framework and Low Rank Adaptation of Large Language Models for Multi-label Emotion Detection
Kun Lu
Proceedings of the 19th International Workshop on Semantic Evaluation (SemEval-2025)
This paper presents a prompt engineering framework to further improve the performance of generative models on the multi-label classification task released as SemEval-2025 Task 11 Track A. The task is to predict the presence of each emotion contained in a text segment, namely joy, fear, anger, surprise, and sadness. A generative large language model fine-tuned with instructions can handle multi-label classification to a certain extent, but there is still room for improvement in its correctness and accuracy. To address this, we propose a prompt engineering framework that further enhances performance while following the instruction fine-tuning format to generate the model’s responses. Compared with fine-tuning on simple instructions, our system improves the overall macro F1 score by 0.3, with a marked improvement in the accuracy of each individual category, and it achieved a good position in the final ranking. Nevertheless, the system still has limitations: local validation results may differ from the official competition results, which could be due to insufficient and unbalanced training samples. The system could therefore be further improved through feature engineering and other data augmentation methods.
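The abstract does not reproduce the paper's prompt template or fine-tuning configuration; the following is a minimal sketch of the kind of pipeline it describes, combining an instruction-style prompt with Low Rank Adaptation via Hugging Face PEFT. The base model name, prompt wording, and hyperparameters are illustrative assumptions, not the authors' settings.

# Sketch: LoRA instruction fine-tuning setup for multi-label emotion detection.
# Model name, prompt wording, and hyperparameters are assumptions for illustration only.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

BASE_MODEL = "meta-llama/Llama-2-7b-hf"  # assumption: any instruction-tunable causal LM

tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
model = AutoModelForCausalLM.from_pretrained(BASE_MODEL)

# Low-rank adapters on the attention projections; the base weights stay frozen.
lora_config = LoraConfig(
    r=8,
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()

def build_prompt(text: str) -> str:
    # Instruction-style prompt asking for all emotions present in the text.
    return (
        "Identify every emotion expressed in the following text. "
        "Answer with a comma-separated subset of: joy, fear, anger, surprise, sadness.\n"
        f"Text: {text}\nEmotions:"
    )

# During training, each example pairs build_prompt(text) with the gold label string
# (e.g. "joy, surprise"); at inference, the generated string is parsed back into labels.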
2016
Recognizing Reference Spans and Classifying their Discourse Facets
Kun Lu | Jin Mao | Gang Li | Jian Xu
Proceedings of the Joint Workshop on Bibliometric-enhanced Information Retrieval and Natural Language Processing for Digital Libraries (BIRNDL)