Cong Duy Vu Hoang


2025

Mastering the Craft of Data Synthesis for CodeLLMs
Meng Chen | Philip Arthur | Qianyu Feng | Cong Duy Vu Hoang | Yu-Heng Hong | Mahdi Kazemi Moghaddam | Omid Nezami | Duc Thien Nguyen | Gioacchino Tangari | Duy Vu | Thanh Vu | Mark Johnson | Krishnaram Kenthapadi | Don Dharmasiri | Long Duong | Yuan-Fang Li
Proceedings of the 2025 Conference of the Nations of the Americas Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)

Large language models (LLMs) have shown impressive performance in code understanding and generation, making coding tasks a key focus for researchers due to their practical applications and value as a testbed for LLM evaluation. Data synthesis and filtering techniques have been widely adopted and shown to be highly effective in this context. In this paper, we present a focused survey and taxonomy of these techniques, emphasizing recent advancements. We highlight key challenges, explore future research directions, and offer practical guidance for new researchers entering the field.
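
As a rough illustration of the synthesize-then-filter pattern this survey covers, the sketch below generates candidate solutions with an LLM and keeps only those that pass execution-based checks. The generate_candidates stub is a hypothetical stand-in for any LLM sampling API, and the unit-test filter is one common criterion from this literature, not a method from the paper itself.

```python
# Minimal synthesize-then-filter sketch for code-LLM training data.
# `generate_candidates` is a hypothetical stand-in for an LLM call.
from typing import List

def generate_candidates(prompt: str, n: int) -> List[str]:
    """Placeholder: in practice, sample n completions from an LLM."""
    raise NotImplementedError

def passes_tests(code: str, test_snippet: str) -> bool:
    """Execution-based filter: keep code only if its unit tests pass."""
    scope: dict = {}
    try:
        exec(code, scope)          # define the candidate function
        exec(test_snippet, scope)  # run asserts against it
        return True
    except Exception:
        return False

def synthesize_dataset(tasks: List[dict], n: int = 8) -> List[dict]:
    """Keep one verified (prompt, solution) pair per task."""
    kept = []
    for task in tasks:
        for cand in generate_candidates(task["prompt"], n):
            if passes_tests(cand, task["tests"]):
                kept.append({"prompt": task["prompt"], "solution": cand})
                break  # one passing solution per task is enough
    return kept
```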

Distill-C: Enhanced NL2SQL via Distilled Customization with LLMs
Cong Duy Vu Hoang | Gioacchino Tangari | Clemence Lanfranchi | Dalu Guo | Paul Cayet | Steve Siu | Don Dharmasiri | Yuan-Fang Li | Long Duong | Damien Hilloulin | Rhicheek Patra | Sungpack Hong | Hassan Chafi
Proceedings of the 2025 Conference of the Nations of the Americas Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 3: Industry Track)

The growing adoption of large language models (LLMs) in business applications has amplified interest in Natural Language to SQL (NL2SQL) solutions, which face competing demands for high performance and efficiency. Domain- and customer-specific requirements further complicate the problem. To address this conundrum, we introduce Distill-C, a distilled customization framework tailored for NL2SQL tasks. Distill-C uses large teacher LLMs to produce high-quality synthetic data through a robust and scalable pipeline. Finetuning smaller, open-source LLMs on this synthesized data enables them to rival or outperform teacher models an order of magnitude larger. Evaluated on multiple challenging benchmarks, Distill-C achieves an average improvement of 36% in execution accuracy over the base models from three distinct LLM families. Additionally, on three internal customer benchmarks, Distill-C demonstrates a 22.6% performance improvement over the base models. Our results demonstrate that Distill-C is an effective, high-performing, and generalizable approach for deploying lightweight yet powerful NL2SQL models, delivering high accuracy while maintaining low computational cost.
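
As a minimal sketch of a teacher-to-student distillation loop of the kind described above: a large teacher model synthesizes (question, SQL) pairs, an execution check filters them, and the survivors become finetuning data for a small model. The teacher_generate_sql stub is hypothetical, and the SQLite execution filter is an assumption about how synthesized SQL might be validated, not a description of Distill-C's actual pipeline.

```python
# Hedged sketch of a teacher-to-student NL2SQL distillation loop.
# `teacher_generate_sql` is a hypothetical stand-in for a large teacher LLM.
import sqlite3
from typing import List

def teacher_generate_sql(question: str, schema: str) -> str:
    """Placeholder: prompt a large teacher LLM for a SQL answer."""
    raise NotImplementedError

def executes_ok(sql: str, db_path: str) -> bool:
    """Keep only queries that run without error on the target database."""
    try:
        with sqlite3.connect(db_path) as conn:
            conn.execute(sql).fetchall()
        return True
    except sqlite3.Error:
        return False

def build_finetuning_set(examples: List[dict], db_path: str) -> List[dict]:
    """Synthesize (question, SQL) pairs and filter them by execution."""
    data = []
    for ex in examples:
        sql = teacher_generate_sql(ex["question"], ex["schema"])
        if executes_ok(sql, db_path):
            data.append({"question": ex["question"],
                         "schema": ex["schema"],
                         "sql": sql})
    return data  # finetune a small open-source LLM on this set
```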

SQLong: Enhanced NL2SQL for Longer Contexts with LLMs
Dai Quoc Nguyen | Cong Duy Vu Hoang | Duy Quang Vu | Gioacchino Tangari | Thanh Vu | Don Dharmasiri | Yuan-Fang Li | Long Duong
Proceedings of the 4th Table Representation Learning Workshop

Open-weight large language models (LLMs) have significantly advanced performance on the Natural Language to SQL (NL2SQL) task. However, their effectiveness diminishes when dealing with large database schemas, as the context length increases. To address this limitation, we present SQLong, a novel and efficient data augmentation framework designed to enhance LLM performance in long-context scenarios for the NL2SQL task. SQLong generates augmented datasets by extending existing database schemas with additional synthetic CREATE TABLE commands and corresponding data rows, sampled from diverse schemas in the training data. This approach effectively simulates long-context scenarios during finetuning and evaluation. Through experiments on the Spider and BIRD datasets, we demonstrate that LLMs finetuned with SQLong-augmented data significantly outperform those trained on standard datasets. These results highlight SQLong’s practicality and its impact on improving NL2SQL capabilities in real-world settings with complex database schemas.
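
The augmentation recipe is concrete enough to sketch: pad a target example's schema with CREATE TABLE blocks sampled from other schemas in the training pool until a desired context size is reached. The function below is a hypothetical rendering of that idea, not the authors' released code; max_chars is an illustrative character budget standing in for a token limit.

```python
# Hedged sketch of SQLong-style schema padding for long-context training.
import random
from typing import List

def augment_schema(target_schema: str,
                   schema_pool: List[str],
                   max_chars: int = 20_000,
                   seed: int = 0) -> str:
    """Pad a schema with distractor CREATE TABLE blocks from other schemas."""
    rng = random.Random(seed)
    distractors = [s for s in schema_pool if s != target_schema]
    rng.shuffle(distractors)
    parts = [target_schema]
    for extra in distractors:
        if sum(len(p) for p in parts) + len(extra) > max_chars:
            break
        parts.append(extra)
    rng.shuffle(parts)  # avoid always placing the gold schema first
    return "\n\n".join(parts)
```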

2018

Improved Neural Machine Translation using Side Information
Cong Duy Vu Hoang | Gholamreza Haffari | Trevor Cohn
Proceedings of the Australasian Language Technology Association Workshop 2018

In this work, we investigate whether side information is helpful in neural machine translation (NMT). We study various kinds of side information, including topical information and personal traits, and propose different ways of incorporating them into existing NMT models. Our experimental results show the benefits of side information in improving NMT models.
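
One simple way to incorporate side information, concatenating a learned side-feature embedding onto every source token embedding before encoding, can be sketched as follows. This is an illustrative PyTorch variant, not necessarily the exact architecture studied in the paper; the GRU encoder and all dimensions are assumptions.

```python
# Hedged sketch: injecting side information into an NMT encoder by
# concatenating a side-feature embedding onto every token embedding.
import torch
import torch.nn as nn

class SideInfoEncoder(nn.Module):
    def __init__(self, vocab_size: int, n_side: int,
                 d_tok: int = 256, d_side: int = 32, d_hid: int = 256):
        super().__init__()
        self.tok_emb = nn.Embedding(vocab_size, d_tok)
        self.side_emb = nn.Embedding(n_side, d_side)  # e.g. a topic id
        self.rnn = nn.GRU(d_tok + d_side, d_hid, batch_first=True)

    def forward(self, tokens: torch.Tensor, side_id: torch.Tensor):
        # tokens: (batch, seq_len); side_id: (batch,)
        tok = self.tok_emb(tokens)
        side = self.side_emb(side_id).unsqueeze(1).expand(-1, tokens.size(1), -1)
        return self.rnn(torch.cat([tok, side], dim=-1))
```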

2017

Towards Decoding as Continuous Optimisation in Neural Machine Translation
Cong Duy Vu Hoang | Gholamreza Haffari | Trevor Cohn
Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing

We propose a novel decoding approach for neural machine translation (NMT) based on continuous optimisation. We reformulate decoding, a discrete optimisation problem, as a continuous problem, so that optimisation can make use of efficient gradient-based techniques. Our decoding framework allows more accurate decoding for standard NMT models, and enables decoding in otherwise intractable settings such as the intersection of several different NMT models. Our empirical results show that our decoding framework is effective and can lead to substantial improvements in translations, particularly in situations where greedy search and beam search are not feasible. Finally, we show that the technique is highly competitive with, and complementary to, reranking.
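
The reformulation can be demonstrated on a toy objective: relax the discrete token choice at each output position to a softmax over free logits, then ascend the gradient of a differentiable sequence score. The bigram scorer below is a toy stand-in for an NMT model's log-likelihood; the whole example is illustrative, not the paper's algorithm.

```python
# Toy sketch of decoding as continuous optimisation: each output position
# is a softmax-relaxed distribution over the vocabulary, optimised by
# gradient ascent on a differentiable score.
import torch

V, T = 5, 4                       # vocab size, output length
bigram = torch.randn(V, V)        # toy stand-in for model scores
logits = torch.zeros(T, V, requires_grad=True)
opt = torch.optim.Adam([logits], lr=0.1)

for step in range(200):
    p = torch.softmax(logits, dim=-1)          # (T, V) relaxed tokens
    # Expected bigram score of the relaxed sequence, fully differentiable.
    score = sum(p[t] @ bigram @ p[t + 1] for t in range(T - 1))
    opt.zero_grad()
    (-score).backward()                        # ascend the score
    opt.step()

decoded = torch.softmax(logits, dim=-1).argmax(dim=-1)  # discretise
print(decoded.tolist())
```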

2016

Incorporating Structural Alignment Biases into an Attentional Neural Translation Model
Trevor Cohn | Cong Duy Vu Hoang | Ekaterina Vymolova | Kaisheng Yao | Chris Dyer | Gholamreza Haffari
Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies

Incorporating Side Information into Recurrent Neural Network Language Models
Cong Duy Vu Hoang | Trevor Cohn | Gholamreza Haffari
Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies

Improving Neural Translation Models with Linguistic Factors
Cong Duy Vu Hoang | Gholamreza Haffari | Trevor Cohn
Proceedings of the Australasian Language Technology Association Workshop 2016

2014

A Rule-Augmented Statistical Phrase-based Translation System
Cong Duy Vu Hoang | AiTi Aw | Nhung T. H. Nguyen
Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics: System Demonstrations

2012

An Unsupervised and Data-Driven Approach for Spell Checking in Vietnamese OCR-scanned Texts
Cong Duy Vu Hoang | Ai Ti Aw
Proceedings of the Workshop on Innovative Hybrid Approaches to the Processing of Textual Data

2010

Towards Automated Related Work Summarization
Cong Duy Vu Hoang | Min-Yen Kan
Coling 2010: Posters