SubmissionNumber#=%=#231
FinalPaperTitle#=%=#Pollice Verso at SemEval-2024 Task 6: The Roman Empire Strikes Back
ShortPaperTitle#=%=#
NumberOfPages#=%=#8
CopyrightSigned#=%=#Jan Pfister
JobTitle#==#
Organization#==#
Abstract#==#We present an intuitive approach for hallucination detection in LLM outputs that is modeled after how humans would go about this task. We engage several LLM "experts" to independently assess whether a response is hallucinated. For this, we select recent and popular LLMs smaller than 7B parameters. By analyzing the log probabilities of tokens that signal a positive or negative judgment, we determine the likelihood of hallucination. Additionally, we enhance the performance of our "experts" by automatically refining their prompts using the recently introduced OPRO framework. Furthermore, we ensemble the replies of the different experts in a uniform or weighted manner, which builds a quorum from the expert replies. Overall, this leads to accuracy improvements of up to 10.6 p.p. compared to the challenge baseline. We show that a Zephyr 3B model is well suited for the task. Our approach can be applied to both the model-agnostic and model-aware subtasks without modification and is flexible and easily extendable to related tasks.
Author{1}{Firstname}#=%=#Konstantin
Author{1}{Lastname}#=%=#Kobs
Author{1}{Username}#=%=#konstantinkobs
Author{1}{Email}#=%=#kobs@informatik.uni-wuerzburg.de
Author{1}{Affiliation}#=%=#Julius-Maximilians University Würzburg
Author{2}{Firstname}#=%=#Jan
Author{2}{Lastname}#=%=#Pfister
Author{2}{Username}#=%=#janpf
Author{2}{Email}#=%=#pfister@informatik.uni-wuerzburg.de
Author{2}{Affiliation}#=%=#Julius-Maximilians-Universität Würzburg (JMU)
Author{3}{Firstname}#=%=#Andreas
Author{3}{Lastname}#=%=#Hotho
Author{3}{Username}#=%=#hotho
Author{3}{Email}#=%=#hotho@informatik.uni-wuerzburg.de
Author{3}{Affiliation}#=%=#University of Würzburg

==========