SubmissionNumber#=%=#40 FinalPaperTitle#=%=#NU-RU at SemEval-2024 Task 6: Hallucination and Related Observable Overgeneration Mistake Detection Using Hypothesis-Target Similarity and SelfCheckGPT ShortPaperTitle#=%=# NumberOfPages#=%=#8 CopyrightSigned#=%=#Thanet Markchom JobTitle#==# Organization#==# Abstract#==#One of the key challenges in Natural Language Generation (NLG) is "hallucination," in which the generated output appears fluent and grammatically sound but may contain incorrect information. To address this challenge, ``SemEval-2024 Task 6 - SHROOM, a Shared-task on Hallucinations and Related Observable Overgeneration Mistakes'' is introduced. This task focuses on detecting overgeneration hallucinations in texts generated by Large Language Models (LLMs) for various NLG tasks. To tackle this task, this paper proposes two methods: (1) hypothesis-target similarity, which measures the text similarity between a generated text (hypothesis) and an intended reference text (target), and (2) a SelfCheckGPT-based method that assesses hallucinations via predefined prompts designed for different NLG tasks. Experiments were conducted on the dataset provided in this task. The results show that both proposed methods can effectively detect hallucinations in LLM-generated texts, with room for further improvement. Author{1}{Firstname}#=%=#Thanet Author{1}{Lastname}#=%=#Markchom Author{1}{Username}#=%=#thanet.mar Author{1}{Email}#=%=#thanet.mar@gmail.com Author{1}{Affiliation}#=%=#University of Reading Author{2}{Firstname}#=%=#Subin Author{2}{Lastname}#=%=#Jung Author{2}{Email}#=%=#s.jung4@newcastle.ac.uk Author{2}{Affiliation}#=%=#Newcastle University Author{3}{Firstname}#=%=#Huizhi Author{3}{Lastname}#=%=#Liang Author{3}{Email}#=%=#huizhi.liang@newcastle.ac.uk Author{3}{Affiliation}#=%=#Newcastle University ==========