Kaijie Mo


2025

Does Visual Grounding Enhance the Understanding of Embodied Knowledge in Large Language Models?
Zhihui Yang | Yupei Wang | Kaijie Mo | Zhe Zhao | Renfen Hu
Findings of the Association for Computational Linguistics: EMNLP 2025

Despite significant progress in multimodal language models (LMs), it remains unclear whether visual grounding enhances their understanding of embodied knowledge compared to text-only models. To address this question, we propose a novel embodied knowledge understanding benchmark based on perceptual theory from psychology, encompassing the five external senses (visual, auditory, tactile, gustatory, and olfactory) as well as interoception. The benchmark assesses models’ perceptual abilities across these sensory modalities through vector comparison and question-answering tasks comprising over 1,700 questions. Comparing 30 state-of-the-art LMs, we find, surprisingly, that vision-language models (VLMs) do not outperform text-only models on either task. Moreover, models perform significantly worse in the visual dimension than in the other sensory dimensions. Further analysis reveals that the vector representations are easily influenced by word form and frequency, and that the models struggle to answer questions involving spatial perception and reasoning. Our findings underscore the need for more effective integration of embodied knowledge in LMs to enhance their understanding of the physical world.
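To make the vector-comparison task concrete, the following is a minimal, hypothetical sketch: it checks whether a model's embeddings rank a perceptually similar word pair above a dissimilar one within a sensory dimension. The `embed` function, the example triplet, and the pass criterion are illustrative assumptions, not the benchmark's released code or data.

```python
# Hypothetical sketch of a vector-comparison check for one sensory
# dimension. Everything here is a stand-in for illustration only.
import numpy as np

def cosine(u: np.ndarray, v: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def embed(word: str) -> np.ndarray:
    """Placeholder embedding lookup: swap in any text-only LM or VLM
    encoder under comparison. Random vectors here keep the sketch runnable."""
    rng = np.random.default_rng(abs(hash(word)) % (2**32))
    return rng.standard_normal(768)

# Illustrative triplet for the gustatory dimension: the model "passes"
# if the perceptually similar pair scores higher than the dissimilar one.
anchor, similar, dissimilar = "lemon", "lime", "bread"
correct = cosine(embed(anchor), embed(similar)) > cosine(embed(anchor), embed(dissimilar))
print(f"triplet judged correctly: {correct}")
```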

CMT-Eval: A Novel Chinese Multi-turn Dialogue Evaluation Dataset Addressing Real-world Conversational Challenges
Siyu Tian | Kaijie Mo | Yupei Wang | Renfen Hu
Findings of the Association for Computational Linguistics: EMNLP 2025

Multi-turn dialogue is a key paradigm for interaction between users and Large Language Models (LLMs). However, existing evaluation benchmarks fail to capture users’ evolving needs and how their diverse conversation styles affect the dialogue flow. To address these limitations, we propose CMT-Eval, the first dedicated dataset for fine-grained evaluation of Chinese multi-turn dialogue systems. Built upon a linguistic theory-driven Speech Act Framework, diverse user personas, and varied conversational challenges, CMT-Eval comprises 596 high-quality dialogues with 4,431 turns, simulating realistic, multifaceted, and challenging conversations. Experiments reveal that models struggle with specific speech acts, user personas, and complex scenarios, highlighting the effectiveness of CMT-Eval in assessing LLMs’ multi-turn dialogue capabilities and providing valuable insights for their enhancement. The dataset, code, and prompts are available at https://github.com/hejaida/CMT-Eval.
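The following is a minimal sketch of the kind of multi-turn evaluation loop the abstract describes: a simulated user with a persona issues turns tagged with speech acts, and each model response is scored. `chat_model`, `judge`, and the example turns are placeholders rather than the paper's actual interfaces; the real dataset, code, and prompts are in the linked repository.

```python
# Hypothetical multi-turn evaluation loop in the spirit of CMT-Eval.
# The model backend, the judge, and the dialogue data are stand-ins.
from dataclasses import dataclass, field

@dataclass
class Turn:
    speech_act: str   # e.g. "request", "challenge", "topic_shift"
    user_message: str

@dataclass
class Dialogue:
    persona: str
    turns: list[Turn]
    history: list[dict] = field(default_factory=list)

def chat_model(history: list[dict]) -> str:
    """Placeholder for the LLM under evaluation."""
    return "(model reply)"

def judge(speech_act: str, reply: str) -> float:
    """Placeholder scorer: rate how well the reply handles the act (0 to 1)."""
    return 1.0 if reply else 0.0

def evaluate(dialogue: Dialogue) -> float:
    """Play the dialogue turn by turn and average the per-turn scores."""
    scores = []
    for turn in dialogue.turns:
        dialogue.history.append({"role": "user", "content": turn.user_message})
        reply = chat_model(dialogue.history)
        dialogue.history.append({"role": "assistant", "content": reply})
        scores.append(judge(turn.speech_act, reply))
    return sum(scores) / len(scores)

demo = Dialogue(
    persona="impatient expert user",
    turns=[Turn("request", "Explain multi-turn dialogue evaluation."),
           Turn("challenge", "Your last answer was too vague.")],
)
print(f"dialogue score: {evaluate(demo):.2f}")
```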

2024

ExpertEase: A Multi-Agent Framework for Grade-Specific Document Simplification with Large Language Models
Kaijie Mo | Renfen Hu
Findings of the Association for Computational Linguistics: EMNLP 2024

Text simplification is crucial for making texts more accessible, yet current research primarily focuses on sentence-level simplification, neglecting document-level simplification and the different reading levels of target audiences. To bridge these gaps, we introduce ExpertEase, a multi-agent framework for grade-specific document simplification using Large Language Models (LLMs). ExpertEase simulates real-world text simplification by introducing expert, teacher, and student agents that cooperate on the task and rely on external tools for calibration. Experiments demonstrate that this multi-agent approach significantly enhances LLMs’ ability to simplify reading materials for diverse audiences. Furthermore, we evaluate LLMs of varying sizes and types and compare LLM-generated texts with human-authored ones, highlighting the potential of LLMs in educational resource development and offering guidance for future research.
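To make the expert-teacher-student cooperation concrete, here is a minimal sketch under the assumption of a generic chat-completion function `llm()` and an external readability tool used for calibration. The prompts, the stopping rule, and the `readability_grade` stub are simplified stand-ins, not ExpertEase's actual components.

```python
# Hypothetical expert/teacher/student simplification loop with external
# calibration. All prompts and tools are illustrative assumptions.
def llm(system: str, user: str) -> str:
    """Placeholder for any chat-completion backend."""
    return f"[{system[:20]}...] response to: {user[:40]}..."

def readability_grade(text: str) -> float:
    """Placeholder for an external readability tool used for calibration."""
    return 5.0  # pretend the draft reads at grade 5

def simplify(document: str, target_grade: int, max_rounds: int = 3) -> str:
    # Expert agent produces an initial grade-targeted draft.
    draft = llm("You are an expert simplifier.",
                f"Simplify for grade {target_grade}:\n{document}")
    for _ in range(max_rounds):
        # Stop once the external tool says the draft fits the target grade.
        if abs(readability_grade(draft) - target_grade) < 0.5:
            break
        # Teacher agent critiques grade fit; student agent flags hard spots.
        feedback = llm("You are a teacher checking grade fit.",
                       f"Target grade {target_grade}. Critique:\n{draft}")
        confusion = llm("You are a student at the target grade.",
                        f"Say what is hard to understand:\n{draft}")
        # Expert agent revises the draft using both kinds of feedback.
        draft = llm("You are an expert simplifier.",
                    f"Revise using this feedback:\n{feedback}\n{confusion}\n{draft}")
    return draft

print(simplify("Photosynthesis converts light energy into chemical energy.", 4))
```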