SubmissionNumber#=%=#44
FinalPaperTitle#=%=#HIJLI_JU at SemEval-2024 Task 7: Enhancing Quantitative Question Answering Using Fine-tuned BERT Models
ShortPaperTitle#=%=#
NumberOfPages#=%=#6
CopyrightSigned#=%=#Sandip Sarkar
JobTitle#==#
Organization#==#Hijli College, Kharagpur
Abstract#==#In data and numerical analysis, Quantitative Question Answering (QQA) has become a crucial instrument, providing deep insights into large datasets and supporting well-informed decisions in industries such as finance, healthcare, and business. This paper describes the "HIJLI_JU" team's participation in NumEval Task 1 at SemEval-2024, with a particular emphasis on quantitative comprehension. Specifically, our method addresses numerical complexities by fine-tuning a BERT model for multiple-choice question answering, leveraging the Hugging Face ecosystem. The effectiveness of our QQA model is assessed using a variety of metrics, with an emphasis on scikit-learn's f1_score(). A thorough analysis of the macro-F1, micro-F1, weighted-F1, and binary-F1 scores yields detailed insights into the model's performance across a range of question formats.
Author{1}{Firstname}#=%=#Partha Sarathi
Author{1}{Lastname}#=%=#Sengupta
Author{1}{Email}#=%=#jitendriyo@gmail.com
Author{1}{Affiliation}#=%=#Jadavpur University
Author{2}{Firstname}#=%=#Sandip
Author{2}{Lastname}#=%=#Sarkar
Author{2}{Username}#=%=#sandip16911
Author{2}{Email}#=%=#sandipsarkar.ju@gmail.com
Author{2}{Affiliation}#=%=#Hijli College
Author{3}{Firstname}#=%=#Dipankar
Author{3}{Lastname}#=%=#Das
Author{3}{Username}#=%=#dipankar.dipnil2005
Author{3}{Email}#=%=#dipankar.dipnil2005@gmail.com
Author{3}{Affiliation}#=%=#Jadavpur University

==========

èéáğö