SubmissionNumber#=%=#135 FinalPaperTitle#=%=#scaLAR SemEval-2024 Task 1: Semantic Textual Relatedness for English ShortPaperTitle#=%=# NumberOfPages#=%=#5 CopyrightSigned#=%=#Mogilipalem Hemanth Kumar JobTitle#==# Organization#==# Abstract#==#This study investigates Semantic Textual Relatedness (STR) within Natural Language Processing (NLP) through experiments conducted on the dataset from the SemEval-2024 STR task. The dataset comprises train instances with three features (PairID, Text, and Score) and test instances with two features (PairID and Text), where the sentence pairs are separated by '\n' in the Text column. Using BERT (Sentence Transformers pipeline), we explore two approaches: one with fine-tuning (Track A: Supervised) and another without fine-tuning (Track B: Unsupervised). Fine-tuning the BERT pipeline yielded a Spearman correlation coefficient of 0.803, while without fine-tuning a coefficient of 0.693 was attained using cosine similarity. The study concludes by emphasizing the significance of STR in NLP tasks, highlighting the role of pre-trained language models such as BERT and Sentence Transformers in enhancing semantic relatedness assessments. Author{1}{Firstname}#=%=#Anand Kumar Author{1}{Lastname}#=%=#M Author{1}{Email}#=%=#m_anandkumar@nitk.edu.in Author{1}{Affiliation}#=%=#NITK Author{2}{Firstname}#=%=#Hemanth Kumar Author{2}{Lastname}#=%=#M Author{2}{Username}#=%=#hemanth955 Author{2}{Email}#=%=#mogilipalemhemanthkumar@gmail.com Author{2}{Affiliation}#=%=#NITK ==========