SubmissionNumber#=%=#161 FinalPaperTitle#=%=#MAMET at SemEval-2024 Task 7: Supervised Enhanced Reasoning Agent Model ShortPaperTitle#=%=# NumberOfPages#=%=#6 CopyrightSigned#=%=#Mahmood Kalantari JobTitle#==# Organization#==# Abstract#==#At the intersection of language understanding and numerical reasoning lies a formidable challenge in natural language processing (NLP). Our study addresses NumEval, focusing on numeral-aware language understanding and generation using the QP, QQA and QNLI datasets. We harness the Orca2 model, fine-tuning it in both standard and Chain-of-Thought modes with prompt tuning to improve accuracy. Although standard training methodologies yield commendable accuracy rates, our findings reveal intriguing disparities in model performance that run counter to our initial conjectures. The core contribution of this work is its elucidation of the interplay between dataset sequencing and model performance. We expected that fine-tuning sequentially on the QP and QNLI datasets would produce a general model with good accuracy on all three datasets. This goal was not achieved, and we introduce our structure as a means of working toward it. Author{1}{Firstname}#=%=#Mahmood Author{1}{Lastname}#=%=#Kalantari Author{1}{Username}#=%=#mahmood1998 Author{1}{Email}#=%=#mahmood.kalantari76@gmail.com Author{1}{Affiliation}#=%=#Iran University of Science & Technology Author{2}{Firstname}#=%=#Mehdi Author{2}{Lastname}#=%=#Feghhi Author{2}{Email}#=%=#feghhi_me@comp.iust.ac.ir Author{2}{Affiliation}#=%=#Iran University of Science & Technology Author{3}{Firstname}#=%=#Taha Author{3}{Lastname}#=%=#Khany Alamooti Author{3}{Email}#=%=#khany_taha@comp.iust.ac.ir Author{3}{Affiliation}#=%=#Iran University of Science & Technology ==========