Yupei Wang
2025
Does Visual Grounding Enhance the Understanding of Embodied Knowledge in Large Language Models?
Zhihui Yang | Yupei Wang | Kaijie Mo | Zhe Zhao | Renfen Hu
Findings of the Association for Computational Linguistics: EMNLP 2025
Despite significant progress in multimodal language models (LMs), it remains unclear whether visual grounding enhances their understanding of embodied knowledge compared to text-only models. To address this question, we propose a novel embodied knowledge understanding benchmark based on perceptual theory from psychology, encompassing the visual, auditory, tactile, gustatory, and olfactory external senses, as well as interoception. The benchmark assesses the models’ perceptual abilities across different sensory modalities through vector comparison and question-answering tasks with over 1,700 questions. By comparing 30 state-of-the-art LMs, we surprisingly find that vision-language models (VLMs) do not outperform text-only models in either task. Moreover, the models perform significantly worse in the visual dimension than in other sensory dimensions. Further analysis reveals that the vector representations are easily influenced by word form and frequency, and that the models struggle to answer questions involving spatial perception and reasoning. Our findings underscore the need for more effective integration of embodied knowledge in LMs to enhance their understanding of the physical world.
CMT-Eval: A Novel Chinese Multi-turn Dialogue Evaluation Dataset Addressing Real-world Conversational Challenges
Siyu Tian | Kaijie Mo | Yupei Wang | Renfen Hu
Findings of the Association for Computational Linguistics: EMNLP 2025
Multi-turn dialogue is a key paradigm for interaction between users and Large Language Models (LLMs). However, existing evaluation benchmarks fail to capture users’ evolving needs and how their diverse conversation styles affect the dialogue flow. To address these limitations, we propose CMT-Eval, the first dedicated dataset for fine-grained evaluation of Chinese multi-turn dialogue systems. Built upon a linguistic-theory-driven Speech Act Framework, diverse user personas, and varied conversational challenges, CMT-Eval comprises 596 high-quality dialogues with 4,431 turns, simulating realistic, multifaceted, and challenging conversations. Experiments reveal that models struggle with specific speech acts, user personas, and complex scenarios, highlighting the effectiveness of CMT-Eval in assessing LLMs’ multi-turn dialogue capabilities and providing valuable insights for their enhancement. The dataset, code, and prompts are available at https://github.com/hejaida/CMT-Eval.
2024
Beyond Agreement: Diagnosing the Rationale Alignment of Automated Essay Scoring Methods based on Linguistically-informed Counterfactuals
Yupei Wang | Renfen Hu | Zhe Zhao
Findings of the Association for Computational Linguistics: EMNLP 2024
While current Automated Essay Scoring (AES) methods demonstrate high scoring agreement with human raters, their decision-making mechanisms are not fully understood. Our proposed method, using counterfactual intervention assisted by Large Language Models (LLMs), reveals that BERT-like models primarily focus on sentence-level features, whereas LLMs such as GPT-3.5, GPT-4, and Llama-3 are sensitive to conventions & accuracy, language complexity, and organization, indicating a more comprehensive rationale alignment with scoring rubrics. Moreover, LLMs can discern counterfactual interventions when giving feedback on essays. Our approach improves understanding of neural AES methods and can also be applied to other domains seeking transparency in model-driven decisions.