Are Large Language Models Chronically Online Surfers? A Dataset for Chinese Internet Meme Explanation

Yubo Xie, Chenkai Wang, Zongyang Ma, Fahui Miao
Abstract
Large language models (LLMs) are trained on vast amounts of text from the Internet, but do they truly understand the viral content that rapidly spreads online—commonly known as memes? In this paper, we introduce CHIME, a dataset for CHinese Internet Meme Explanation. The dataset comprises popular phrase-based memes from the Chinese Internet, annotated with detailed information on their meaning, origin, example sentences, types, etc. To evaluate whether LLMs understand these memes, we designed two tasks. In the first task, we assessed the models’ ability to explain a given meme, identify its origin, and generate appropriate example sentences. The results show that while LLMs can explain the meanings of some memes, their performance declines significantly for culturally and linguistically nuanced meme types. Additionally, they consistently struggle to provide accurate origins for the memes. In the second task, we created a set of multiple-choice questions (MCQs) requiring LLMs to select the most appropriate meme to fill in a blank within a contextual sentence. While the evaluated models were able to provide correct answers, their performance remains noticeably below human levels. We have made CHIME public and hope it will facilitate future research on computational meme understanding.
Anthology ID:
2025.emnlp-main.863
Volume:
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing
Month:
November
Year:
2025
Address:
Suzhou, China
Editors:
Christos Christodoulopoulos, Tanmoy Chakraborty, Carolyn Rose, Violet Peng
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
17073–17094
URL:
https://preview.aclanthology.org/ingest-emnlp/2025.emnlp-main.863/
Cite (ACL):
Yubo Xie, Chenkai Wang, Zongyang Ma, and Fahui Miao. 2025. Are Large Language Models Chronically Online Surfers? A Dataset for Chinese Internet Meme Explanation. In Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing, pages 17073–17094, Suzhou, China. Association for Computational Linguistics.
Cite (Informal):
Are Large Language Models Chronically Online Surfers? A Dataset for Chinese Internet Meme Explanation (Xie et al., EMNLP 2025)
PDF:
https://preview.aclanthology.org/ingest-emnlp/2025.emnlp-main.863.pdf
Checklist:
 2025.emnlp-main.863.checklist.pdf