SubmissionNumber#=%=#252
FinalPaperTitle#=%=#Challenges at SemEval 2024 Task 7: Contrastive Learning Approach on Numeral-Aware Language Generation
ShortPaperTitle#=%=#
NumberOfPages#=%=#4
CopyrightSigned#=%=#Hao-Yun Chuang
JobTitle#==#
Organization#==#Ali Zhunis, Department of Linguistics, University of Tübingen, Tübingen, Germany
Hao-Yun Chuang, Graduate Institute of Linguistics, National Chengchi University, Taipei, Taiwan
Abstract#==#Although Large Language Models (LLMs) excel at headline generation under ROUGE evaluation, they still fail to reason about numbers and to generate news article headlines with accurate numerals. Participating in SemEval-2024 Task 7 subtask 3, our team uses a contrastive loss to improve the model's understanding of numbers across their different expressions and its ability to distinguish between different numbers and their respective expressions. This system description paper uses T5 and BART as baseline models, comparing their results with and without the contrastive loss. The results show that BART with contrastive loss outperforms all other models and achieves the highest number accuracy.
Author{1}{Firstname}#=%=#Hao-Yun
Author{1}{Lastname}#=%=#Chuang
Author{1}{Username}#=%=#milanochuang
Author{1}{Email}#=%=#110555010@nccu.edu.tw
Author{1}{Affiliation}#=%=#Graduate Institute of Linguistics, National Chengchi University
Author{2}{Firstname}#=%=#Ali
Author{2}{Lastname}#=%=#Zhunis
Author{2}{Username}#=%=#ali_zhunis
Author{2}{Email}#=%=#ali.zhunis@student.uni-tuebingen.de
Author{2}{Affiliation}#=%=#Department of Linguistics, University of Tübingen
==========