SubmissionNumber#=%=#200 FinalPaperTitle#=%=#HaRMoNEE at SemEval-2024 Task 6: Tuning-based Approaches to Hallucination Recognition ShortPaperTitle#=%=# NumberOfPages#=%=#10 CopyrightSigned#=%=#Timothy Obiso JobTitle#==# Organization#==#Brandeis University 415 South St, Waltham, MA 02453 Abstract#==#This paper presents the Hallucination Recognition Model for New Experiment Evaluation (HaRMoNEE) team's winning (#1) and 10th-place submissions to the two subtasks of SemEval-2024 Task 6: Shared-task on Hallucinations and Related Observable Overgeneration Mistakes (SHROOM). This task challenged participants to design systems that detect hallucinations in Large Language Model (LLM) outputs. Team HaRMoNEE proposes two architectures: (1) fine-tuning an off-the-shelf transformer-based model and (2) prompt tuning large-scale LLMs. One submission from the fine-tuning approach outperformed all other submissions on the model-aware subtask; one submission from the prompt-tuning approach ranked 10th on the leaderboard for the model-agnostic subtask. Our systems also include pre-processing, system-specific tuning, post-processing, and evaluation. Author{1}{Firstname}#=%=#Timothy Author{1}{Lastname}#=%=#Obiso Author{1}{Username}#=%=#tobiso Author{1}{Email}#=%=#timothyobiso@brandeis.edu Author{1}{Affiliation}#=%=#Brandeis University Author{2}{Firstname}#=%=#Jingxuan Author{2}{Lastname}#=%=#Tu Author{2}{Username}#=%=#jxtu Author{2}{Email}#=%=#jxtu@brandeis.edu Author{2}{Affiliation}#=%=#Brandeis University Author{3}{Firstname}#=%=#James Author{3}{Lastname}#=%=#Pustejovsky Author{3}{Username}#=%=#jamesp Author{3}{Email}#=%=#jamesp@cs.brandeis.edu Author{3}{Affiliation}#=%=#Brandeis University ==========