2025
Mastering the Craft of Data Synthesis for CodeLLMs
Meng Chen | Philip Arthur | Qianyu Feng | Cong Duy Vu Hoang | Yu-Heng Hong | Mahdi Kazemi Moghaddam | Omid Nezami | Duc Thien Nguyen | Gioacchino Tangari | Duy Vu | Thanh Vu | Mark Johnson | Krishnaram Kenthapadi | Don Dharmasiri | Long Duong | Yuan-Fang Li
Proceedings of the 2025 Conference of the Nations of the Americas Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)
Large language models (LLMs) have shown impressive performance in code understanding and generation, making coding tasks a key focus for researchers due to their practical applications and value as a testbed for LLM evaluation. Data synthesis and filtering techniques have been widely adopted and shown to be highly effective in this context. In this paper, we present a focused survey and taxonomy of these techniques, emphasizing recent advancements. We highlight key challenges, explore future research directions, and offer practical guidance for new researchers entering the field.
Distill-C: Enhanced NL2SQL via Distilled Customization with LLMs
Cong Duy Vu Hoang | Gioacchino Tangari | Clemence Lanfranchi | Dalu Guo | Paul Cayet | Steve Siu | Don Dharmasiri | Yuan-Fang Li | Long Duong | Damien Hilloulin | Rhicheek Patra | Sungpack Hong | Hassan Chafi
Proceedings of the 2025 Conference of the Nations of the Americas Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 3: Industry Track)
The growing adoption of large language models (LLMs) in business applications has amplified interest in Natural Language to SQL (NL2SQL) solutions, in which there are competing demands for high performance and efficiency. Domain- and customer-specific requirements further complicate the problem. To address this conundrum, we introduce Distill-C, a distilled customization framework tailored for NL2SQL tasks. Distill-C utilizes large teacher LLMs to produce high-quality synthetic data through a robust and scalable pipeline. Finetuning smaller, open-source LLMs on this synthesized data enables them to rival or outperform teacher models an order of magnitude larger. Evaluated on multiple challenging benchmarks, Distill-C achieves an average improvement of 36% in execution accuracy compared to the base models from three distinct LLM families. Additionally, on three internal customer benchmarks, Distill-C demonstrates a 22.6% performance improvement over the base models. Our results demonstrate that Distill-C is an effective, high-performing, and generalizable approach for deploying lightweight yet powerful NL2SQL models, delivering high accuracy while maintaining low computational cost.
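The abstract does not spell out how the synthetic-data pipeline ensures quality, but a common filtering stage in NL2SQL distillation is execution-based validation: discard teacher-generated (question, SQL) pairs whose SQL fails to run against the target schema. The sketch below illustrates that idea only; the function name and use of an in-memory SQLite database are illustrative assumptions, not Distill-C's actual implementation.

```python
import sqlite3

def execution_filter(pairs, schema_ddl):
    """Keep only (question, sql) pairs whose SQL executes without
    error against a scratch database built from the schema DDL."""
    kept = []
    for question, sql in pairs:
        conn = sqlite3.connect(":memory:")
        try:
            conn.executescript(schema_ddl)   # build the schema
            conn.execute(sql)                # try the candidate query
            kept.append((question, sql))
        except sqlite3.Error:
            pass  # discard candidates that fail to execute
        finally:
            conn.close()
    return kept

schema = "CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT);"
candidates = [
    ("How many users are there?", "SELECT COUNT(*) FROM users;"),
    ("List the emails.", "SELECT email FROM users;"),  # invalid column
]
filtered = execution_filter(candidates, schema)
```

Only the first pair survives here, since `email` is not a column in the schema; real pipelines typically layer further checks (deduplication, result-based comparison) on top of bare executability.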
SQLong: Enhanced NL2SQL for Longer Contexts with LLMs
Dai Quoc Nguyen | Cong Duy Vu Hoang | Duy Quang Vu | Gioacchino Tangari | Thanh Vu | Don Dharmasiri | Yuan-Fang Li | Long Duong
Proceedings of the 4th Table Representation Learning Workshop
Open-weight large language models (LLMs) have significantly advanced performance in the Natural Language to SQL (NL2SQL) task. However, their effectiveness diminishes when dealing with large database schemas, as the context length increases. To address this limitation, we present SQLong, a novel and efficient data augmentation framework designed to enhance LLM performance in long-context scenarios for the NL2SQL task. SQLong generates augmented datasets by extending existing database schemas with additional synthetic CREATE TABLE commands and corresponding data rows, sampled from diverse schemas in the training data. This approach effectively simulates long-context scenarios during finetuning and evaluation. Through experiments on the Spider and BIRD datasets, we demonstrate that LLMs finetuned with SQLong-augmented data significantly outperform those trained on standard datasets. These results underscore SQLong’s practicality and its impact on improving NL2SQL capabilities in real-world settings with complex database schemas.
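The core augmentation step described in the abstract — padding a target schema with CREATE TABLE statements drawn from other databases to simulate a long context — can be sketched as follows. This is a minimal illustration under assumed names; the paper's actual pipeline also samples corresponding data rows, which this sketch omits.

```python
import random

def pad_schema(target_schema, distractor_pool, num_extra, seed=0):
    """Extend a schema with distractor CREATE TABLE statements sampled
    from other databases, SQLong-style, to simulate a long context."""
    rng = random.Random(seed)
    extras = rng.sample(distractor_pool, min(num_extra, len(distractor_pool)))
    statements = list(target_schema) + extras
    # Shuffle so the gold tables are not always at the start of the prompt.
    rng.shuffle(statements)
    return "\n".join(statements)

gold = ["CREATE TABLE singer (id INT, name TEXT);"]
pool = [f"CREATE TABLE t{i} (id INT, v{i} TEXT);" for i in range(50)]
long_schema = pad_schema(gold, pool, num_extra=10)
```

The resulting `long_schema` still contains the gold table, so the gold SQL remains valid, while the distractor tables inflate the context the model must read through during finetuning and evaluation.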