Puzzled by Puzzles: When Vision-Language Models Can’t Take a Hint
Heekyung Lee | Jiaxin Ge | Tsung-Han Wu | Minwoo Kang | Trevor Darrell | David M. Chan
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing
Rebus puzzles, visual riddles that encode language through imagery, spatial arrangement, and symbolic substitution, pose a unique challenge to current vision-language models (VLMs). Unlike traditional image captioning or question answering tasks, rebus solving requires multimodal abstraction, symbolic reasoning, and a grasp of cultural, phonetic, and linguistic puns. In this short paper, we investigate the capacity of contemporary VLMs to interpret and solve rebus puzzles by constructing a hand-generated and annotated benchmark of diverse English-language rebus puzzles, ranging from simple pictographic substitutions to spatially dependent cues (“head” over “heels”). We analyze how different VLMs perform, and our findings reveal that while VLMs exhibit some surprising capabilities in decoding simple visual clues, they struggle significantly with tasks requiring abstract reasoning, lateral thinking, and understanding visual metaphors.