Don Dharmasiri


2025

Mastering the Craft of Data Synthesis for CodeLLMs
Meng Chen | Philip Arthur | Qianyu Feng | Cong Duy Vu Hoang | Yu-Heng Hong | Mahdi Kazemi Moghaddam | Omid Nezami | Duc Thien Nguyen | Gioacchino Tangari | Duy Vu | Thanh Vu | Mark Johnson | Krishnaram Kenthapadi | Don Dharmasiri | Long Duong | Yuan-Fang Li
Proceedings of the 2025 Conference of the Nations of the Americas Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)

Large language models (LLMs) have shown impressive performance in code understanding and generation, making coding tasks a key focus for researchers, both for their practical applications and for their value as a testbed for LLM evaluation. Data synthesis and filtering techniques have been widely adopted and shown to be highly effective in this context. In this paper, we present a focused survey and taxonomy of these techniques, emphasizing recent advancements. We highlight key challenges, explore future research directions, and offer practical guidance for new researchers entering the field.

Distill-C: Enhanced NL2SQL via Distilled Customization with LLMs
Cong Duy Vu Hoang | Gioacchino Tangari | Clemence Lanfranchi | Dalu Guo | Paul Cayet | Steve Siu | Don Dharmasiri | Yuan-Fang Li | Long Duong | Damien Hilloulin | Rhicheek Patra | Sungpack Hong | Hassan Chafi
Proceedings of the 2025 Conference of the Nations of the Americas Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 3: Industry Track)

The growing adoption of large language models (LLMs) in business applications has amplified interest in Natural Language to SQL (NL2SQL) solutions, in which high performance and efficiency are competing demands. Domain- and customer-specific requirements further complicate the problem. To address this conundrum, we introduce Distill-C, a distilled customization framework tailored for NL2SQL tasks. Distill-C uses large teacher LLMs to produce high-quality synthetic data through a robust and scalable pipeline. Fine-tuning smaller, open-source LLMs on this synthesized data enables them to rival or outperform teacher models an order of magnitude larger. Evaluated on multiple challenging benchmarks, Distill-C achieves an average improvement of 36% in execution accuracy over the base models from three distinct LLM families. On three internal customer benchmarks, it further demonstrates a 22.6% performance improvement over the base models. These results show that Distill-C is an effective, high-performing, and generalizable approach for deploying lightweight yet powerful NL2SQL models, delivering high accuracy at low computational cost.
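
The abstract outlines a teacher-to-student pipeline: a large teacher LLM synthesizes NL2SQL training data, the data is filtered for quality, and a smaller open-source model is fine-tuned on the result. Below is a minimal sketch of such a pipeline, not the paper's implementation: the function call_teacher_llm, the execution-based filter, and the JSONL output format are illustrative assumptions.

```python
# Sketch of a Distill-C-style synthesis pipeline. Assumptions (not from the
# paper): `call_teacher_llm` is a hypothetical wrapper around any teacher
# model API; filtering is done by executing candidate SQL on an empty
# in-memory database; output is JSONL for downstream fine-tuning.
import json
import sqlite3
from typing import Iterable

def call_teacher_llm(prompt: str) -> str:
    """Hypothetical teacher call; replace with your LLM client of choice."""
    raise NotImplementedError

def synthesize_pairs(schema_ddl: str, n: int) -> Iterable[tuple[str, str]]:
    """Ask the teacher to invent (question, SQL) pairs for a given schema."""
    for _ in range(n):
        reply = call_teacher_llm(
            "Given this SQLite schema, write one natural-language question "
            "and the SQL that answers it, as JSON with keys 'question' and "
            f"'sql':\n{schema_ddl}"
        )
        record = json.loads(reply)  # assumes the teacher returns valid JSON
        yield record["question"], record["sql"]

def executes_ok(sql: str, schema_ddl: str) -> bool:
    """Execution-based filter: keep only SQL that runs against the schema."""
    conn = sqlite3.connect(":memory:")
    try:
        conn.executescript(schema_ddl)
        conn.execute(sql)
        return True
    except sqlite3.Error:
        return False
    finally:
        conn.close()

def build_training_file(schema_ddl: str, n: int, path: str) -> None:
    """Write the filtered pairs as JSONL, ready for student fine-tuning."""
    with open(path, "w") as f:
        for question, sql in synthesize_pairs(schema_ddl, n):
            if executes_ok(sql, schema_ddl):
                f.write(json.dumps({"prompt": question, "completion": sql}) + "\n")
```

Executing candidate SQL on an in-memory SQLite database is one common way to discard malformed synthetic queries before fine-tuning; the paper's actual pipeline and filtering criteria may differ.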