SubmissionNumber#=%=#75
FinalPaperTitle#=%=#HU at SemEval-2024 Task 8A: Can Contrastive Learning Learn Embeddings to Detect Machine-Generated Text?
ShortPaperTitle#=%=#
NumberOfPages#=%=#7
CopyrightSigned#=%=#Shubhashis Roy Dipta
JobTitle#==#
Organization#==#University of Maryland, Baltimore County
1000 Hilltop Cir, Baltimore, MD 21250
Abstract#==#This paper describes our system developed for SemEval-2024 Task 8, "Multigenerator, Multidomain, and Multilingual Black-Box Machine-Generated Text Detection." Machine-generated text has become a major concern due to the use of large language models (LLMs) for fake text generation, phishing, cheating on exams, and even plagiarizing copyrighted material. Many systems have been developed to detect machine-generated text. Nonetheless, the majority of these systems rely on the text-generating model. This limitation is impractical in real-world scenarios, as it is often impossible to know which specific model the user has used for text generation. In this work, we propose a single model based on contrastive learning, which uses ~40% of the baseline's parameters (149M vs. 355M) yet shows comparable performance on the test dataset (21st out of 137 participants). Our key finding is that even without an ensemble of multiple models, a single base model can achieve comparable performance with the help of data augmentation and contrastive learning.
Author{1}{Firstname}#=%=#Shubhashis
Author{1}{Lastname}#=%=#Roy Dipta
Author{1}{Username}#=%=#dipta007
Author{1}{Email}#=%=#sroydip1@umbc.edu
Author{1}{Affiliation}#=%=#University of Maryland, Baltimore County
Author{2}{Firstname}#=%=#Sadat
Author{2}{Lastname}#=%=#Shahriar
Author{2}{Username}#=%=#sadatuh1971
Author{2}{Email}#=%=#sadat.shrr@gmail.com
Author{2}{Affiliation}#=%=#University of Houston
==========