Jiaming Zhang
2025
SURGE: On the Potential of Large Language Models as General-Purpose Surrogate Code Executors
Bohan Lyu | Siqiao Huang | Zichen Liang | Qian Sun | Jiaming Zhang
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing
Neural surrogate models are powerful and efficient tools in data mining. Meanwhile, large language models (LLMs) have demonstrated remarkable capabilities in code-related tasks, such as generation and understanding. However, an equally important yet underexplored question is whether LLMs can serve as surrogate models for code execution prediction. To systematically investigate it, we introduce SURGE, a comprehensive benchmark with 1160 problems covering 8 key aspects: multi-language programming tasks, competition-level programming problems, repository-level code analysis, high-cost scientific computing, time-complexity-intensive algorithms, buggy code analysis, programs dependent on specific compilers or execution environments, and formal mathematical proof verification. Through extensive analysis of 21 open-source and proprietary LLMs, we examine scaling laws, data efficiency, and predictive accuracy. Our findings reveal important insights about the feasibility of LLMs as efficient surrogates for computational processes. The benchmark and evaluation framework are available at https://github.com/Imbernoulli/SURGE.
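The task SURGE probes is whether an LLM can act as a surrogate executor: given a program (and its input), predict the exact output without running it, then compare against ground truth from real execution. The sketch below illustrates one plausible evaluation loop under that framing; the prompt wording and the `query_llm` placeholder are illustrative assumptions, not the benchmark's official harness, which is available at the linked repository.

```python
import subprocess

def run_reference(code: str, stdin: str = "") -> str:
    """Ground-truth output obtained by actually executing the program."""
    proc = subprocess.run(
        ["python3", "-c", code],
        input=stdin,
        capture_output=True,
        text=True,
        timeout=30,
    )
    return proc.stdout.strip()

def query_llm(prompt: str) -> str:
    """Placeholder for a call to any LLM client (hypothetical hook)."""
    raise NotImplementedError("plug in your model client here")

def surrogate_predict(code: str, stdin: str = "") -> str:
    """Ask the model to predict stdout without executing the code."""
    prompt = (
        "You are a surrogate code executor. Predict the exact standard output "
        "of the following Python program without executing it.\n\n"
        f"Program:\n{code}\n\nStdin:\n{stdin}\n\nOutput:"
    )
    return query_llm(prompt).strip()

def evaluate(problems: list[dict]) -> float:
    """Exact-match accuracy of predicted outputs against real execution."""
    correct = 0
    for p in problems:
        truth = run_reference(p["code"], p.get("stdin", ""))
        pred = surrogate_predict(p["code"], p.get("stdin", ""))
        correct += int(pred == truth)
    return correct / len(problems)
```

In this framing, exact-match accuracy is only the simplest scoring choice; categories such as high-cost scientific computing or buggy-code analysis would need task-specific comparisons, which the official framework handles per category.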
Investigating and Enhancing Vision-Audio Capability in Omnimodal Large Language Models
Rui Hu | Delai Qiu | Shuyu Wei | Jiaming Zhang | Yining Wang | Shengping Liu | Jitao Sang
Findings of the Association for Computational Linguistics: ACL 2025
Omnimodal Large Language Models (OLLMs) have shown significant progress in integrating vision and text, but still struggle with integrating vision and audio, often exhibiting suboptimal performance when processing audio queries compared to text queries. This disparity is primarily due to insufficient alignment between vision and audio modalities during training, leading to inadequate attention to visual information when using audio queries. To mitigate this issue, we propose a Self-Knowledge Distillation (Self-KD) training method where the vision-text component of the OLLM serves as the teacher and the vision-audio component as the student. This enables the model to process audio in a manner analogous to its text processing. Our experimental results demonstrate that Self-KD is an effective method for enhancing the vision-audio capabilities of OLLMs by learning from the vision-text components, which subsequently improves the interaction between audio and images and results in improved performance on multimodal tasks.
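The core idea of Self-KD is that the model's own vision-text pathway acts as the teacher for its vision-audio pathway on paired queries, so the student learns to attend to visual information the way the text branch does. The sketch below shows one common way to write such a distillation objective as a temperature-scaled KL term combined with the usual language-modeling loss; the function name, tensor shapes, and the mixing weight `alpha` are assumptions for illustration, not the paper's exact implementation.

```python
import torch
import torch.nn.functional as F

def self_kd_loss(student_logits: torch.Tensor,
                 teacher_logits: torch.Tensor,
                 labels: torch.Tensor,
                 temperature: float = 2.0,
                 alpha: float = 0.5) -> torch.Tensor:
    """Illustrative self-knowledge-distillation objective.

    student_logits: vision-audio pathway outputs, shape (batch, seq, vocab)
    teacher_logits: vision-text pathway outputs on the paired text query
                    (detached so gradients only update the student)
    labels:         target token ids for the standard LM loss, shape (batch, seq)
    """
    # Soft targets from the vision-text teacher.
    teacher_probs = F.softmax(teacher_logits.detach() / temperature, dim=-1)
    student_log_probs = F.log_softmax(student_logits / temperature, dim=-1)

    # KL divergence between teacher and student output distributions.
    kd = F.kl_div(student_log_probs, teacher_probs, reduction="batchmean")
    kd = kd * (temperature ** 2)  # conventional temperature scaling

    # Ordinary next-token cross-entropy on the ground-truth answer.
    ce = F.cross_entropy(
        student_logits.view(-1, student_logits.size(-1)),
        labels.view(-1),
        ignore_index=-100,
    )
    return alpha * kd + (1.0 - alpha) * ce
```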