SubmissionNumber#=%=#219
FinalPaperTitle#=%=#Compos Mentis at SemEval2024 Task6: A Multi-Faceted Role-based Large Language Model Ensemble to Detect Hallucination
ShortPaperTitle#=%=#
NumberOfPages#=%=#6
CopyrightSigned#=%=#Souvik Das
JobTitle#==#
Organization#==#
Abstract#==#Hallucinations, in which large language models (LLMs) generate fluent but factually incorrect outputs, pose challenges for applications that require strict truthfulness. This work proposes a multi-faceted approach to detecting such hallucinations across diverse language tasks. We leverage automatic data annotation using a proprietary LLM, fine-tuning of the Mistral-7B-Instruct-v0.2 model on annotated and benchmark data, role-based and rationale-based prompting strategies, and an ensemble method that combines the different model outputs through majority voting. This comprehensive framework aims to improve the robustness and reliability of hallucination detection for LLM generations.
Author{1}{Firstname}#=%=#Souvik
Author{1}{Lastname}#=%=#Das
Author{1}{Username}#=%=#souvikda
Author{1}{Email}#=%=#souvikda@buffalo.edu
Author{1}{Affiliation}#=%=#University at Buffalo
Author{2}{Firstname}#=%=#Rohini K.
Author{2}{Lastname}#=%=#Srihari
Author{2}{Username}#=%=#rohini
Author{2}{Email}#=%=#rohini@buffalo.edu
Author{2}{Affiliation}#=%=#University at Buffalo, SUNY