SubmissionNumber#=%=#73
FinalPaperTitle#=%=#YNU-HPCC at SemEval-2024 Task 9: Using Pre-trained Language Models with LoRA for Multiple-choice Answering Tasks
ShortPaperTitle#=%=#
NumberOfPages#=%=#6
CopyrightSigned#=%=#Jie Wang
JobTitle#==#
Organization#==#
Abstract#==#This paper describes the system we built for Task 9: BrainTeaser in the SemEval-2024 competition, a multiple-choice question-answering task. Our system employs the decoding-enhanced BERT with disentangled attention (DeBERTa) architecture and is fine-tuned using low-rank adaptation (LoRA) to further optimize its performance. In addition, we integrate focal loss into our framework to address label imbalance. The combination of these techniques yields strong results: on the official test set, our system achieves an accuracy of 0.9 on subtask 1, ranking fifth among all participants, and an accuracy of 0.781 on subtask 2, ranking seventh. The code for this paper is published at: https://github.com/123yunnandaxue/Semveal-2024_task9.
Author{1}{Firstname}#=%=#Jie
Author{1}{Lastname}#=%=#Wang
Author{1}{Username}#=%=#jiew10086
Author{1}{Email}#=%=#wangjie_qpqj@stu.ynu.edu.cn
Author{1}{Affiliation}#=%=#Yunnan University
Author{2}{Firstname}#=%=#Jin
Author{2}{Lastname}#=%=#Wang
Author{2}{Username}#=%=#wangjin0818
Author{2}{Email}#=%=#wangjin@ynu.edu.cn
Author{2}{Affiliation}#=%=#Yunnan University
Author{3}{Firstname}#=%=#Xuejie
Author{3}{Lastname}#=%=#Zhang
Author{3}{Username}#=%=#xjzhang
Author{3}{Email}#=%=#xjzhang@ynu.edu.cn
Author{3}{Affiliation}#=%=#Yunnan University

==========