MDIT-Bench: Evaluating the Dual-Implicit Toxicity in Large Multimodal Models

Bohan Jin, Shuhan Qi, Kehai Chen, Xinyi Guo, Xuan Wang


Abstract
The widespread use of Large Multimodal Models (LMMs) has raised concerns about model toxicity. However, current research mainly focuses on explicit toxicity, paying less attention to more implicit forms of toxicity involving prejudice and discrimination. To address this limitation, we introduce a subtler type of toxicity, named dual-implicit toxicity, and a novel toxicity benchmark termed MDIT-Bench: the Multimodal Dual-Implicit Toxicity Benchmark. Specifically, we first create the MDIT-Dataset of dual-implicit toxicity using the proposed Multi-stage Human-in-loop In-context Generation method. Based on this dataset, we construct MDIT-Bench, a benchmark for evaluating the sensitivity of models to dual-implicit toxicity, with 317,638 questions covering 12 categories, 23 subcategories, and 780 topics. MDIT-Bench includes three difficulty levels, and we propose a metric to measure the toxicity gap a model exhibits across them. In experiments, we evaluated 13 prominent LMMs on MDIT-Bench, and the results show that these LMMs cannot handle dual-implicit toxicity effectively. Model performance drops significantly at the hard level, revealing that these LMMs still contain a significant amount of hidden but activatable toxicity. The data will be released upon the paper's acceptance.
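To make the evaluation protocol concrete, below is a minimal Python sketch of how per-level scoring and a toxicity-gap score of the kind the abstract describes might be computed. The record format, function names, and the easy-vs-hard accuracy difference are illustrative assumptions; this page does not give the paper's actual metric definition, which may differ.

# Hypothetical sketch: per-level accuracy and a toxicity-gap score.
# The record format and the simple easy-vs-hard difference are
# assumptions for illustration, not the paper's exact metric.
from collections import defaultdict

LEVELS = ("easy", "medium", "hard")  # MDIT-Bench's three difficulty levels

def accuracy_by_level(records):
    """records: iterable of dicts like
    {"level": "easy", "prediction": "B", "answer": "B"}.
    Returns the fraction of correct answers per difficulty level."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for r in records:
        total[r["level"]] += 1
        correct[r["level"]] += int(r["prediction"] == r["answer"])
    return {lvl: correct[lvl] / total[lvl] for lvl in LEVELS if total[lvl]}

def toxicity_gap(acc):
    """One plausible gap score: how much a model's sensitivity to
    toxic content degrades from the easiest to the hardest level."""
    return acc["easy"] - acc["hard"]

if __name__ == "__main__":
    # Toy results for a single model; a real run would parse LMM outputs.
    demo = [
        {"level": "easy", "prediction": "A", "answer": "A"},
        {"level": "easy", "prediction": "B", "answer": "B"},
        {"level": "medium", "prediction": "A", "answer": "B"},
        {"level": "medium", "prediction": "C", "answer": "C"},
        {"level": "hard", "prediction": "A", "answer": "C"},
        {"level": "hard", "prediction": "B", "answer": "B"},
    ]
    acc = accuracy_by_level(demo)
    print(acc)                # e.g. {'easy': 1.0, 'medium': 0.5, 'hard': 0.5}
    print(toxicity_gap(acc))  # larger gap => more hidden but activatable toxicity

Under this reading, a large positive gap means the model refuses or flags toxic content at the easy level but fails to do so once the toxicity is better disguised, which is the "hidden but activatable" behavior the abstract reports.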
Anthology ID:
2025.findings-acl.650
Volume:
Findings of the Association for Computational Linguistics: ACL 2025
Month:
July
Year:
2025
Address:
Vienna, Austria
Editors:
Wanxiang Che, Joyce Nabende, Ekaterina Shutova, Mohammad Taher Pilehvar
Venues:
Findings
Publisher:
Association for Computational Linguistics
Pages:
12552–12574
URL:
https://preview.aclanthology.org/ingestion-acl-25/2025.findings-acl.650/
Cite (ACL):
Bohan Jin, Shuhan Qi, Kehai Chen, Xinyi Guo, and Xuan Wang. 2025. MDIT-Bench: Evaluating the Dual-Implicit Toxicity in Large Multimodal Models. In Findings of the Association for Computational Linguistics: ACL 2025, pages 12552–12574, Vienna, Austria. Association for Computational Linguistics.
Cite (Informal):
MDIT-Bench: Evaluating the Dual-Implicit Toxicity in Large Multimodal Models (Jin et al., Findings 2025)
PDF:
https://preview.aclanthology.org/ingestion-acl-25/2025.findings-acl.650.pdf