Leveraging Large Models to Evaluate Novel Content: A Case Study on Advertisement Creativity

Zhaoyi Joey Hou, Adriana Kovashka, Xiang Lorraine Li


Abstract
Evaluating creativity is challenging, even for humans, not only because of its subjectivity but also because it involves complex cognitive processes. Inspired by work in marketing, we break down visual advertisement creativity into atypicality and originality. With fine-grained human annotations along these dimensions, we propose a suite of tasks tailored to this subjective problem. We also evaluate the alignment between state-of-the-art (SoTA) vision-language models (VLMs) and humans on our proposed benchmark, demonstrating both the promise and the challenges of using VLMs for automatic creativity assessment.
Anthology ID: 2025.emnlp-main.1072
Volume: Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing
Month: November
Year: 2025
Address: Suzhou, China
Editors: Christos Christodoulopoulos, Tanmoy Chakraborty, Carolyn Rose, Violet Peng
Venue: EMNLP
Publisher: Association for Computational Linguistics
Pages: 21169–21188
URL: https://preview.aclanthology.org/ingest-emnlp/2025.emnlp-main.1072/
Cite (ACL): Zhaoyi Joey Hou, Adriana Kovashka, and Xiang Lorraine Li. 2025. Leveraging Large Models to Evaluate Novel Content: A Case Study on Advertisement Creativity. In Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing, pages 21169–21188, Suzhou, China. Association for Computational Linguistics.
Cite (Informal): Leveraging Large Models to Evaluate Novel Content: A Case Study on Advertisement Creativity (Hou et al., EMNLP 2025)
PDF: https://preview.aclanthology.org/ingest-emnlp/2025.emnlp-main.1072.pdf
Checklist: 2025.emnlp-main.1072.checklist.pdf