Siming Chen


2025

AI-Press: A Multi-Agent News Generating and Feedback Simulation System Powered by Large Language Models
Xiawei Liu | Shiyue Yang | Xinnong Zhang | Haoyu Kuang | Libo Sun | Yihang Yang | Siming Chen | Xuanjing Huang | Zhongyu Wei
Proceedings of the 31st International Conference on Computational Linguistics: System Demonstrations

We introduce AI-Press, an automated news drafting and polishing system based on multi-agent collaboration and Retrieval-Augmented Generation. We also develop a feedback simulation system that generates public responses while accounting for demographic distributions. Demo link: https://youtu.be/TmjfJrbzaRU
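As an illustration of the pipeline the abstract describes, here is a minimal sketch of a multi-agent draft-then-polish loop with retrieval augmentation and demographically weighted feedback simulation. All names here (retrieve_context, call_llm, the agent roles) are hypothetical placeholders, not AI-Press's actual API.

# Hypothetical sketch of a multi-agent draft-then-polish news pipeline with
# retrieval augmentation; illustrative only, not the AI-Press implementation.
import random

def retrieve_context(topic: str, corpus: list[str], k: int = 3) -> list[str]:
    """Toy retriever: rank corpus snippets by word overlap with the topic."""
    topic_words = set(topic.lower().split())
    scored = sorted(corpus, key=lambda s: -len(topic_words & set(s.lower().split())))
    return scored[:k]

def call_llm(role: str, prompt: str) -> str:
    """Placeholder for a real LLM call (e.g., a hosted or local model client)."""
    return f"[{role} output for: {prompt[:60]}...]"

def generate_article(topic: str, corpus: list[str]) -> str:
    # Drafter writes from retrieved context; editor critiques; polisher revises.
    context = "\n".join(retrieve_context(topic, corpus))
    draft = call_llm("drafter", f"Write a news draft on '{topic}' using:\n{context}")
    critique = call_llm("editor", f"Critique this draft:\n{draft}")
    return call_llm("polisher", f"Revise per critique.\nDraft:\n{draft}\nCritique:\n{critique}")

def simulate_feedback(article: str, demographics: dict[str, float], n: int = 100) -> list[str]:
    """Sample reader personas from a demographic distribution; ask each for a response."""
    groups, weights = zip(*demographics.items())
    personas = random.choices(groups, weights=weights, k=n)
    return [call_llm(f"reader:{p}", f"React to this article:\n{article}") for p in personas]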

MMSciBench: Benchmarking Language Models on Chinese Multimodal Scientific Problems
Xinwu Ye | Chengfan Li | Siming Chen | Wei Wei | Robert Tang
Findings of the Association for Computational Linguistics: ACL 2025

Recent advances in large language models (LLMs) and large vision-language models (LVLMs) have shown promise across many tasks, yet their scientific reasoning capabilities remain largely untested, particularly in multimodal settings. We present MMSciBench, a benchmark for evaluating mathematical and physical reasoning in text-only and text-image formats, with human-annotated difficulty levels, solutions with detailed explanations, and taxonomic mappings. Evaluation of state-of-the-art models reveals significant limitations: even the best model achieves only 63.77% accuracy and particularly struggles with visual reasoning tasks. Our analysis exposes critical gaps in complex reasoning and visual-textual integration, establishing MMSciBench as a rigorous standard for measuring progress in multimodal scientific understanding. The code for MMSciBench is open-sourced at GitHub, and the dataset is available at Hugging Face.
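The abstract implies an evaluation loop over text-only and text-image problems scored by exact-match accuracy; below is a minimal sketch of such a loop. The record fields and query_model are assumptions for illustration, not the actual MMSciBench schema.

# Minimal benchmark-style evaluation loop; field names and query_model are assumed.
from dataclasses import dataclass

@dataclass
class Problem:
    question: str
    answer: str
    image_path: str | None = None  # None for text-only items
    difficulty: str = "medium"     # human-annotated difficulty level

def query_model(question: str, image_path: str | None) -> str:
    """Placeholder for a real LLM/LVLM call; an LVLM would also consume the image."""
    return "42"

def evaluate(problems: list[Problem]) -> float:
    # Exact-match accuracy over all items, multimodal or not.
    correct = sum(query_model(p.question, p.image_path).strip() == p.answer.strip()
                  for p in problems)
    return correct / len(problems)

problems = [Problem("What is 6 x 7?", "42"),
            Problem("Read the force from the diagram (N).", "10", image_path="fig1.png")]
print(f"accuracy = {evaluate(problems):.2%}")  # the paper reports 63.77% for the best model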

2024

SoMeLVLM: A Large Vision Language Model for Social Media Processing
Xinnong Zhang | Haoyu Kuang | Xinyi Mou | Hanjia Lyu | Kun Wu | Siming Chen | Jiebo Luo | Xuanjing Huang | Zhongyu Wei
Findings of the Association for Computational Linguistics: ACL 2024

The growth of social media, characterized by its multimodal nature, has given rise to diverse phenomena and challenges, calling for an effective approach to handle automated tasks in a unified way. Powerful large vision-language models make it possible to address a variety of tasks simultaneously, but even with carefully designed prompting methods, general-domain models often fail to align with the unique speaking style and context of social media tasks. In this paper, we introduce SoMeLVLM, a Large Vision Language Model for Social Media Processing: a cognitive framework equipped with five key capabilities, namely knowledge & comprehension, application, analysis, evaluation, and creation. SoMeLVLM is designed to understand and generate realistic social media behavior. We have developed a 654k-example multimodal social media instruction-tuning dataset to support the cognitive framework and fine-tune the model. Our experiments demonstrate that SoMeLVLM achieves state-of-the-art performance on multiple social media tasks, and further analysis shows its significant advantages over baselines in terms of cognitive abilities.
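To make the dataset design concrete, here is a hypothetical layout for a single multimodal instruction-tuning record tagged with one of the five capabilities named in the abstract. The field names are illustrative and may differ from the actual 654k-example dataset.

# Hypothetical record layout for a capability-tagged instruction-tuning example;
# the schema is an assumption, not the published SoMeLVLM dataset format.
import json

CAPABILITIES = {"knowledge_comprehension", "application", "analysis", "evaluation", "creation"}

def make_record(capability: str, instruction: str, image: str | None, response: str) -> dict:
    assert capability in CAPABILITIES, f"unknown capability: {capability}"
    return {"capability": capability,
            "instruction": instruction,
            "image": image,          # path or URL; None for text-only posts
            "response": response}    # target output for supervised fine-tuning

record = make_record("analysis",
                     "Identify the stance of this post toward the new policy.",
                     image="post_0001.jpg",
                     response="The post expresses opposition, signaled by sarcasm in the caption.")
print(json.dumps(record, indent=2))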