Ad-hoc Concept Forming in the Game Codenames as a Means for Evaluating Large Language Models

Sherzod Hakimov, Lara Pfennigschmidt, David Schlangen


Abstract
This study uses the game Codenames as a benchmarking tool to evaluate large language models (LLMs) with respect to specific linguistic and cognitive skills. LLMs play both sides of the game: one side generates a clue word covering several target words, and the other guesses those target words. We designed experiments that control the choice of words (abstract vs. concrete, ambiguous vs. monosemic) or the opponent (programmed to reveal words faster or slower). Recent commercial and open-weight models were compared side by side to identify the factors affecting their performance. The evaluation reveals details about the models' strategies, challenging cases, and the limitations of LLMs.
Anthology ID:
2025.gem-1.63
Volume:
Proceedings of the Fourth Workshop on Generation, Evaluation and Metrics (GEM²)
Month:
July
Year:
2025
Address:
Vienna, Austria and virtual meeting
Editors:
Kaustubh Dhole, Miruna Clinciu
Venues:
GEM | WS
Publisher:
Association for Computational Linguistics
Pages:
728–740
URL:
https://preview.aclanthology.org/corrections-2025-08/2025.gem-1.63/
Cite (ACL):
Sherzod Hakimov, Lara Pfennigschmidt, and David Schlangen. 2025. Ad-hoc Concept Forming in the Game Codenames as a Means for Evaluating Large Language Models. In Proceedings of the Fourth Workshop on Generation, Evaluation and Metrics (GEM²), pages 728–740, Vienna, Austria and virtual meeting. Association for Computational Linguistics.
Cite (Informal):
Ad-hoc Concept Forming in the Game Codenames as a Means for Evaluating Large Language Models (Hakimov et al., GEM 2025)
PDF:
https://preview.aclanthology.org/corrections-2025-08/2025.gem-1.63.pdf