Blinded by Context: Unveiling the Halo Effect of MLLM in AI Hiring

Kyusik Kim, Jeongwoo Ryu, Hyeonseok Jeon, Bongwon Suh


Abstract
This study investigates the halo effect in AI-driven hiring evaluations using Large Language Models (LLMs) and Multimodal Large Language Models (MLLMs). Through experiments with hypothetical job applications, we examined how these models’ evaluations are influenced by non-job-related information, including extracurricular activities and social media images. By analyzing the models’ responses to Likert-scale questions across different competency dimensions, we found that AI models exhibit significant halo effects, particularly in image-based evaluations, while text-based assessments show greater resistance to bias. The findings demonstrate that supplementary multimodal information can substantially influence AI hiring decisions, highlighting potential risks in AI-based recruitment systems.
Anthology ID:
2025.findings-acl.1338
Volume:
Findings of the Association for Computational Linguistics: ACL 2025
Month:
July
Year:
2025
Address:
Vienna, Austria
Editors:
Wanxiang Che, Joyce Nabende, Ekaterina Shutova, Mohammad Taher Pilehvar
Venues:
Findings | WS
Publisher:
Association for Computational Linguistics
Pages:
26067–26113
URL:
https://preview.aclanthology.org/ingestion-acl-25/2025.findings-acl.1338/
Cite (ACL):
Kyusik Kim, Jeongwoo Ryu, Hyeonseok Jeon, and Bongwon Suh. 2025. Blinded by Context: Unveiling the Halo Effect of MLLM in AI Hiring. In Findings of the Association for Computational Linguistics: ACL 2025, pages 26067–26113, Vienna, Austria. Association for Computational Linguistics.
Cite (Informal):
Blinded by Context: Unveiling the Halo Effect of MLLM in AI Hiring (Kim et al., Findings 2025)
PDF:
https://preview.aclanthology.org/ingestion-acl-25/2025.findings-acl.1338.pdf