An Empirical Study on Strong-Weak Model Collaboration for Repo-level Code Generation

Shubham Gandhi, Atharva Naik, Yiqing Xie, Carolyn Rose


Abstract
We study cost-efficient collaboration between strong and weak language models for repository-level code generation, where the weak model handles simpler tasks at lower cost and the most challenging tasks are delegated to the strong model. While many works propose architectures for this setting, few analyze performance relative to cost. We evaluate a broad spectrum of collaboration strategies (context-based, pipeline-based, and dynamic) on GitHub issue resolution. Our most effective collaborative strategy matches the strong model’s performance while reducing cost by 40%. Based on our findings, we offer actionable guidelines for choosing collaboration strategies under varying budget and performance constraints. Our results show that strong–weak collaboration substantially boosts the weak model’s performance at a fraction of the cost, with pipeline-based and context-based methods being the most efficient.
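As a rough illustration of the pipeline-style escalation the abstract describes, the sketch below routes an issue to a weak model first and delegates to the strong model only when a verifier rejects the draft patch. The cost figures, model stubs, and the solve_issue helper are hypothetical, chosen only to make the routing idea concrete; they are not the paper's actual pipeline.

from dataclasses import dataclass
from typing import Callable

# Hypothetical per-call costs, for illustration only (not figures from the paper).
WEAK_COST, STRONG_COST = 1.0, 10.0

@dataclass
class Attempt:
    patch: str
    cost: float
    model: str

def solve_issue(issue: str,
                weak_model: Callable[[str], str],
                strong_model: Callable[[str], str],
                passes_checks: Callable[[str], bool]) -> Attempt:
    # Pipeline-style escalation: the weak model drafts a patch first;
    # only issues it cannot resolve are delegated to the strong model.
    draft = weak_model(issue)
    if passes_checks(draft):
        return Attempt(draft, WEAK_COST, "weak")
    final = strong_model(issue)
    return Attempt(final, WEAK_COST + STRONG_COST, "strong")

if __name__ == "__main__":
    # Stub models and a stub verifier, purely for demonstration.
    weak = lambda issue: "patch-from-weak-model"
    strong = lambda issue: "patch-from-strong-model"
    verifier = lambda patch: patch.endswith("weak-model")  # pretend the weak draft passes
    result = solve_issue("Fix off-by-one error in pagination", weak, strong, verifier)
    print(result.model, result.cost)  # -> weak 1.0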
Anthology ID:
2025.emnlp-main.1043
Volume:
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing
Month:
November
Year:
2025
Address:
Suzhou, China
Editors:
Christos Christodoulopoulos, Tanmoy Chakraborty, Carolyn Rose, Violet Peng
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
20678–20697
URL:
https://preview.aclanthology.org/ingest-emnlp/2025.emnlp-main.1043/
Cite (ACL):
Shubham Gandhi, Atharva Naik, Yiqing Xie, and Carolyn Rose. 2025. An Empirical Study on Strong-Weak Model Collaboration for Repo-level Code Generation. In Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing, pages 20678–20697, Suzhou, China. Association for Computational Linguistics.
Cite (Informal):
An Empirical Study on Strong-Weak Model Collaboration for Repo-level Code Generation (Gandhi et al., EMNLP 2025)
PDF:
https://preview.aclanthology.org/ingest-emnlp/2025.emnlp-main.1043.pdf
Checklist:
2025.emnlp-main.1043.checklist.pdf