LIME: Less Is More for MLLM Evaluation
King Zhu | Qianbo Zang | Shian Jia | Siwei Wu | Feiteng Fang | Yizhi Li | Shuyue Guo | Tianyu Zheng | Jiawei Guo | Bo Li | Haoning Wu | Xingwei Qu | Jian Yang | Ruibo Liu | Xiang Yue | Jiaheng Liu | Chenghua Lin | Hamid Alinejad-Rokny | Min Yang | Shiwen Ni | Wenhao Huang | Ge Zhang
Findings of the Association for Computational Linguistics: ACL 2025
Multimodal Large Language Models (MLLMs) are evaluated on numerous benchmarks, such as image captioning, visual question answering, and reasoning. However, these benchmarks often include overly simple or uninformative samples, making it difficult to effectively distinguish the performance of different MLLMs. Additionally, evaluating models across many benchmarks creates a significant computational burden. To address these issues, we propose LIME (Less Is More for MLLM Evaluation), a refined and efficient benchmark curated using a semi-automated pipeline. This pipeline filters out uninformative samples and eliminates answer leakage by focusing on tasks that require image-based understanding. Our experiments show that LIME reduces the number of samples by 76% and evaluation time by 77%, while more effectively distinguishing different models’ abilities. Notably, we find that traditional automatic metrics such as CIDEr are insufficient for evaluating MLLMs’ captioning performance, and excluding the caption task score yields a more accurate reflection of overall model performance. All code and data are available at https://anonymous.4open.science/r/LIME-49CD
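To make the two filtering criteria in the abstract concrete, here is a minimal sketch of what such a curation step could look like. This is not LIME's actual pipeline: the helper callables (`answer_without_image`, `answer_with_image`), the exact-match check, and the `easy_threshold` cutoff are all illustrative assumptions.

```python
# Hypothetical sketch of the two filters the abstract describes: drop samples a
# text-only model answers correctly (answer leakage, i.e. the image is not needed)
# and samples nearly every MLLM answers correctly (uninformative / too easy).
from typing import Callable, Dict, List

Sample = Dict[str, str]  # e.g. {"image": ..., "question": ..., "answer": ...}

def filter_benchmark(
    samples: List[Sample],
    answer_without_image: Callable[[Sample], str],     # assumed text-only LLM helper
    answer_with_image: List[Callable[[Sample], str]],  # assumed pool of MLLM helpers
    easy_threshold: float = 0.9,                       # assumed cutoff for "too easy"
) -> List[Sample]:
    kept = []
    for s in samples:
        # Answer leakage: solvable from the question text alone, so it does
        # not test image-based understanding.
        if answer_without_image(s).strip() == s["answer"]:
            continue
        # Uninformative: almost every model in the pool already gets it right,
        # so the sample cannot distinguish model abilities.
        correct = sum(m(s).strip() == s["answer"] for m in answer_with_image)
        if correct / len(answer_with_image) >= easy_threshold:
            continue
        kept.append(s)
    return kept
```

A fully automated version of this rule-based pass would still need human review for borderline cases, which is presumably why the abstract calls the pipeline semi-automated.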