Xudong Lu
2025
CodeV: Issue Resolving with Visual Data
Linhao Zhang | Daoguang Zan | Quanshun Yang | Zhirong Huang | Dong Chen | Bo Shen | Tianyu Liu | Yongshun Gong | Huang Pengjie | Xudong Lu | Guangtai Liang | Lizhen Cui | Qianxiang Wang
Findings of the Association for Computational Linguistics: ACL 2025
Large Language Models (LLMs) have advanced rapidly in recent years, with their applications in software engineering expanding to more complex repository-level tasks. GitHub issue resolving is a key challenge among these tasks. While recent approaches have made progress on this task, they focus on textual data within issues, neglecting visual data. However, this visual data is crucial for resolving issues as it conveys additional knowledge that text alone cannot. We propose CodeV, the first approach to leveraging visual data to enhance the issue-resolving capabilities of LLMs. CodeV resolves each issue by following a two-phase process: data processing and patch generation. To evaluate CodeV, we construct a benchmark for visual issue resolving, namely Visual SWE-bench. Through extensive experiments, we demonstrate the effectiveness of CodeV, as well as provide valuable insights into leveraging visual data to resolve GitHub issues.
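As a rough illustration of the two-phase process described in the abstract (data processing, then patch generation), the sketch below wires a hypothetical pipeline together. The helper names (describe_image, generate_patch), the markdown image regex, and the prompt layout are illustrative assumptions, not the paper's implementation.

```python
# Hedged sketch of a two-phase visual issue-resolving pipeline in the spirit of
# CodeV's description (phase 1: data processing, phase 2: patch generation).
# All helper names and the prompt layout are hypothetical placeholders.
import re
from typing import List

# Matches markdown-embedded images in a GitHub issue body (assumed format).
IMG_PATTERN = re.compile(r"!\[[^\]]*\]\((https?://\S+?\.(?:png|jpg|jpeg|gif))\)")


def extract_image_urls(issue_body: str) -> List[str]:
    """Phase 1a: pull image links embedded in the issue markdown."""
    return IMG_PATTERN.findall(issue_body)


def describe_image(url: str) -> str:
    """Phase 1b: placeholder for a vision-language model call that turns an
    image (screenshot, diagram, ...) into a textual description."""
    return f"[description of {url} produced by a VLM]"


def build_augmented_issue(issue_body: str) -> str:
    """Phase 1: replace raw image links with textual knowledge an LLM can use."""
    descriptions = [describe_image(u) for u in extract_image_urls(issue_body)]
    return issue_body + "\n\nVisual context:\n" + "\n".join(descriptions)


def generate_patch(augmented_issue: str, repo_snippet: str) -> str:
    """Phase 2: placeholder for prompting a code LLM to emit a repository patch."""
    prompt = f"Issue:\n{augmented_issue}\n\nRelevant code:\n{repo_snippet}\n\nPatch:"
    return f"<patch produced by an LLM for a prompt of {len(prompt)} characters>"


if __name__ == "__main__":
    issue = "Button overlaps the sidebar.\n![screenshot](https://example.com/bug.png)"
    print(generate_patch(build_augmented_issue(issue), repo_snippet="css/layout.css: ..."))
```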
2024
Not All Experts are Equal: Efficient Expert Pruning and Skipping for Mixture-of-Experts Large Language Models
Xudong Lu | Qi Liu | Yuhui Xu | Aojun Zhou | Siyuan Huang | Bo Zhang | Junchi Yan | Hongsheng Li
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
A pivotal advancement in the progress of large language models (LLMs) is the emergence of Mixture-of-Experts (MoE) LLMs. Compared to traditional LLMs, MoE LLMs can achieve higher performance with fewer active parameters, but they remain hard to deploy due to their immense parameter sizes. Unlike previous weight pruning methods that rely on specifically designed hardware, this paper aims to enhance the deployment efficiency of MoE LLMs by introducing plug-and-play expert-level sparsification techniques. Specifically, we propose, for the first time to the best of our knowledge, post-training approaches for task-agnostic and task-specific expert pruning and skipping of MoE LLMs, tailored to improve deployment efficiency while maintaining model performance across a wide range of tasks. Extensive experiments show that our proposed methods can simultaneously reduce model size and increase inference speed while maintaining satisfactory performance. Code will be made available at https://github.com/Lucky-Lance/Expert_Sparsity.
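To make the idea of post-training, expert-level sparsification concrete, here is a minimal, hedged sketch that prunes the least-used experts of a toy MoE layer, scored by aggregate router probability over calibration tokens. The toy layer, the scoring rule, and the prune ratio are assumptions for illustration; this is not the API of the Expert_Sparsity repository.

```python
# Minimal sketch of post-training expert pruning for a toy MoE layer.
# NOT the Expert_Sparsity repo's code: the layer, the usage-based scoring
# rule, and the number of kept experts are illustrative assumptions.
import torch
import torch.nn as nn


class ToyMoE(nn.Module):
    def __init__(self, d_model: int = 64, n_experts: int = 8, top_k: int = 2):
        super().__init__()
        self.gate = nn.Linear(d_model, n_experts, bias=False)
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, 4 * d_model), nn.GELU(),
                          nn.Linear(4 * d_model, d_model))
            for _ in range(n_experts)
        )
        self.top_k = top_k

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (tokens, d_model); route each token to its top-k experts.
        scores = self.gate(x).softmax(dim=-1)          # (tokens, n_experts)
        w, idx = scores.topk(self.top_k, dim=-1)       # routing weights / expert ids
        out = torch.zeros_like(x)
        for k in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = idx[:, k] == e
                if mask.any():
                    out[mask] += w[mask, k:k + 1] * expert(x[mask])
        return out


@torch.no_grad()
def prune_experts(moe: ToyMoE, calib: torch.Tensor, keep: int) -> ToyMoE:
    """Drop the least-used experts, judged by summed router probability over
    calibration tokens, then shrink the gate to match the kept experts."""
    usage = moe.gate(calib).softmax(dim=-1).sum(dim=0)  # (n_experts,)
    kept = usage.topk(keep).indices.sort().values
    moe.experts = nn.ModuleList(moe.experts[i] for i in kept.tolist())
    new_gate = nn.Linear(moe.gate.in_features, keep, bias=False)
    new_gate.weight.copy_(moe.gate.weight[kept])
    moe.gate = new_gate
    return moe


if __name__ == "__main__":
    layer = ToyMoE()
    calib_tokens = torch.randn(1024, 64)   # stand-in calibration set
    layer = prune_experts(layer, calib_tokens, keep=4)
    print(layer.gate.out_features, "experts kept")
```

Expert skipping, as named in the abstract, would be the dynamic counterpart of this static pruning: at inference time a token is routed only to experts whose gate score clears a threshold, rather than always to a fixed top-k.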