Jason Klein Liu


2025

OpenCoder: The Open Cookbook for Top-Tier Code Large Language Models
Siming Huang | Tianhao Cheng | Jason Klein Liu | Weidi Xu | Jiaran Hao | Liuyihan Song | Yang Xu | Jian Yang | Jiaheng Liu | Chenchen Zhang | Linzheng Chai | Ruifeng Yuan | Xianzhen Luo | Qiufeng Wang | YuanTao Fan | Qingfu Zhu | Zhaoxiang Zhang | Yang Gao | Jie Fu | Qian Liu | Houyi Li | Ge Zhang | Yuan Qi | Xu Yinghui | Wei Chu | Zili Wang
Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)

Code LLMs have been widely used in various domains, including code generation, logical reasoning, and agent systems. However, most open-access code LLMs release only model weights, lacking key features such as reproducible data pipelines and transparent training protocols, which are crucial for advancing deeper, more reliable investigations. To address this gap, we introduce OpenCoder, a top-tier code LLM that not only achieves performance comparable to leading models but also serves as an “open cookbook” for the research community. Unlike most prior efforts, we release not only model weights and inference code, but also the reproducible training data, the complete data processing pipeline, rigorous experimental ablation results, and detailed training protocols for open scientific research. Our work identifies the key ingredients for building a top-tier code LLM: optimized heuristic rules for data cleaning and deduplication, effective recall of code-related text corpora, and high-quality synthetic data for both the annealing and supervised fine-tuning stages. By offering this level of openness, we aim to broaden access to all aspects of a top-tier code LLM, with OpenCoder serving as both a powerful model and an open foundation to accelerate research and enable reproducible advancements in code intelligence. The released resources are available at https://opencoder-llm.github.io.
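
To give a concrete sense of the kind of heuristic cleaning and deduplication the abstract refers to, here is a minimal, illustrative Python sketch. The function names, thresholds, and specific filters are placeholders chosen for illustration; they are assumptions and do not reproduce OpenCoder's released pipeline, which applies a much larger set of rules and fuzzy (not only exact) deduplication.

```python
import hashlib
import re

def passes_heuristics(text: str,
                      max_line_len: int = 1000,
                      min_alnum_frac: float = 0.25) -> bool:
    """Toy quality filters; the thresholds here are illustrative, not published values."""
    lines = text.splitlines()
    if not lines:
        return False
    # Drop files with extremely long lines (often minified or generated code).
    if max(len(line) for line in lines) > max_line_len:
        return False
    # Drop files dominated by symbols/whitespace rather than code-like tokens.
    alnum = sum(ch.isalnum() for ch in text)
    if alnum / max(len(text), 1) < min_alnum_frac:
        return False
    return True

def dedup_exact(docs: list[str]) -> list[str]:
    """Exact deduplication by hashing whitespace-normalized content."""
    seen, kept = set(), []
    for doc in docs:
        key = hashlib.sha256(re.sub(r"\s+", " ", doc).strip().encode()).hexdigest()
        if key not in seen:
            seen.add(key)
            kept.append(doc)
    return kept

if __name__ == "__main__":
    corpus = [
        "def add(a, b):\n    return a + b\n",
        "def add(a, b):\n    return a + b\n",  # exact duplicate, removed by dedup
        "!!!???@@@###\n" * 50,                 # symbol-heavy noise, removed by filtering
    ]
    cleaned = dedup_exact([d for d in corpus if passes_heuristics(d)])
    print(len(cleaned))  # -> 1
```

In practice a pipeline of this shape is applied at file or repository level before pretraining, with the filter thresholds tuned via ablations such as those released alongside the paper.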