SubmissionNumber#=%=#192
FinalPaperTitle#=%=#NumDecoders at SemEval-2024 Task 7: FlanT5 and GPT enhanced with CoT for Numerical Reasoning
ShortPaperTitle#=%=#
NumberOfPages#=%=#9
CopyrightSigned#=%=#Andres Gonzalez
JobTitle#==#
Organization#==#University of Lorraine, Nancy, France
Abstract#==#In this paper, we present a Chain-of-Thought (CoT) enhanced solution for large language models, including FlanT5 and GPT-3.5 Turbo, aimed at solving mathematical problems to fill in blanks in news headlines. Our approach builds on a data augmentation strategy that incorporates additional mathematical reasoning observations, sourced from another mathematical corpus, into the original dataset. Both automatic and manual annotations are applied to explicitly describe the reasoning steps required for models to reach the target answer. We employ an ensemble majority-voting method to generate final predictions across our best-performing models. Our analysis reveals that while larger models trained on our enhanced dataset achieve significant gains (91% accuracy, ranking 5th on the NumEval Task 3 leaderboard), smaller models do not improve and may even see a decrease in overall accuracy. We conclude that improving our automatic annotations via crowdsourcing, and applying them to models larger than those used in this study, is a worthwhile direction for achieving the most accurate results.
Author{1}{Firstname}#=%=#Andres
Author{1}{Lastname}#=%=#Gonzalez
Author{1}{Username}#=%=#zappangon
Author{1}{Email}#=%=#nanoandres_24@hotmail.com
Author{1}{Affiliation}#=%=#University of Lorraine
Author{2}{Firstname}#=%=#Md Zobaer
Author{2}{Lastname}#=%=#Hossain
Author{2}{Email}#=%=#rowan.hossain@gmail.com
Author{2}{Affiliation}#=%=#University of Lorraine
Author{3}{Firstname}#=%=#Jahedul Alam
Author{3}{Lastname}#=%=#Junaed
Author{3}{Email}#=%=#jahedul25@student.sust.edu
Author{3}{Affiliation}#=%=#Shahjalal University of Science and Technology
==========