2025
Concept Distillation from Strong to Weak Models via Hypotheses-to-Theories Prompting
Emmanuel Aboah Boateng | Cassiano O Becker | Nabiha Asghar | Kabir Walia | Ashwin Srinivasan | Ehi Nosakhare | Soundararajan Srinivasan | Victor Dibia
Proceedings of the 2025 Conference of the Nations of the Americas Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 3: Industry Track)
Hand-crafting high-quality prompts to optimize the performance of language models is a complicated and labor-intensive process. Furthermore, when migrating to newer, smaller, or weaker models (possibly due to latency or cost gains), prompts need to be updated to re-optimize task performance. We propose Concept Distillation (CD), an automatic prompt optimization technique for enhancing weaker models on complex tasks. CD involves: (1) collecting mistakes made by weak models with a base prompt (initialization), (2) using a strong model to generate reasons for these mistakes and create rules/concepts for weak models (induction), and (3) filtering these rules based on validation set performance and integrating them into the base prompt (deduction/verification). We evaluated CD on NL2Code and mathematical reasoning tasks, observing significant performance boosts for small and weaker language models. Notably, Mistral-7B’s accuracy on Multi-Arith increased by 20%, and Phi-3-mini-3.8B’s accuracy on HumanEval rose by 34%. Compared to other automated methods, CD offers an effective, cost-efficient strategy for improving weak models’ performance on complex tasks and enables seamless workload migration across different language models without compromising performance.
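For illustration, a minimal sketch of the initialization/induction/verification loop described in the abstract follows. The helper names (weak_model, strong_model, evaluate) are hypothetical placeholders for illustration, not the authors' released implementation.

    def concept_distillation(base_prompt, train_set, val_set,
                             weak_model, strong_model, evaluate):
        # (1) Initialization: collect the weak model's mistakes under the base prompt.
        mistakes = []
        for x, y in train_set:
            prediction = weak_model(base_prompt, x)
            if prediction != y:
                mistakes.append((x, y, prediction))

        # (2) Induction: a strong model explains each mistake and proposes a
        #     reusable rule/concept intended to steer the weak model.
        rules = [
            strong_model(
                f"Question: {x}\nWrong answer: {wrong}\nCorrect answer: {y}\n"
                "Explain the mistake and state a general rule that prevents it."
            )
            for x, y, wrong in mistakes
        ]

        # (3) Deduction/verification: keep only rules that improve accuracy on
        #     the validation set when appended to the current best prompt.
        best_prompt = base_prompt
        best_acc = evaluate(best_prompt, val_set, weak_model)
        for rule in rules:
            candidate = best_prompt + "\nRule: " + rule
            acc = evaluate(candidate, val_set, weak_model)
            if acc > best_acc:
                best_prompt, best_acc = candidate, acc
        return best_prompt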
2022
SLATE: A Sequence Labeling Approach for Task Extraction from Free-form Inked Content
Apurva Gandhi | Ryan Serrao | Biyi Fang | Gilbert Antonius | Jenna Hong | Tra My Nguyen | Sheng Yi | Ehi Nosakhare | Irene Shaffer | Soundararajan Srinivasan
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing: Industry Track
We present SLATE, a sequence labeling approach for extracting tasks from free-form content such as digitally handwritten (or “inked”) notes on a virtual whiteboard. Our approach allows us to create a single, low-latency model to simultaneously perform sentence segmentation and classification of these sentences into task/non-task sentences. SLATE greatly outperforms a baseline two-model (sentence segmentation followed by classification model) approach, achieving a task F1 score of 84.4%, a sentence segmentation (boundary similarity) score of 88.4%, and three times lower latency compared to the baseline. Furthermore, we provide insights into tackling the challenges of performing NLP in the inking domain. We release both our code and dataset for this novel task.
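As a rough illustration of how a single sequence-labeling pass can yield both sentence boundaries and task/non-task labels, here is a small decoding sketch. The tag scheme (B-/I- prefixes combined with TASK/NOT) is assumed for illustration and may differ from the labels used in the paper.

    def decode(tokens, tags):
        """Group tokens into sentences and label each sentence as task/non-task."""
        sentences, current, current_label = [], [], None
        for token, tag in zip(tokens, tags):
            prefix, label = tag.split("-")       # e.g. "B-TASK" -> ("B", "TASK")
            if prefix == "B" and current:        # a B- tag opens a new sentence
                sentences.append((" ".join(current), current_label))
                current = []
            current.append(token)
            current_label = label
        if current:
            sentences.append((" ".join(current), current_label))
        return sentences

    tokens = ["email", "the", "team", "agenda", "looks", "good"]
    tags   = ["B-TASK", "I-TASK", "I-TASK", "B-NOT", "I-NOT", "I-NOT"]
    print(decode(tokens, tags))
    # [('email the team', 'TASK'), ('agenda looks good', 'NOT')]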
Strategies to Improve Few-shot Learning for Intent Classification and Slot-Filling
Samyadeep Basu | Amr Sharaf | Karine Ip Kiun Chong | Alex Fischer | Vishal Rohra | Michael Amoake | Hazem El-Hammamy | Ehi Nosakhare | Vijay Ramani | Benjamin Han
Proceedings of the Workshop on Structured and Unstructured Knowledge Integration (SUKI)
Intent classification (IC) and slot filling (SF) are two fundamental tasks in modern Natural Language Understanding (NLU) systems. Collecting and annotating large amounts of data to train deep learning models for such systems is not scalable. This problem can be addressed by learning from few examples using fast supervised meta-learning techniques such as prototypical networks. In this work, we systematically investigate how contrastive learning and data augmentation methods can benefit these existing meta-learning pipelines for jointly modelled IC/SF tasks. Through extensive experiments across standard IC/SF benchmarks (SNIPS and ATIS), we show that our proposed approaches outperform standard meta-learning methods: contrastive losses as a regularizer in conjunction with prototypical networks consistently outperform the existing state-of-the-art for both IC and SF tasks, while data augmentation strategies primarily improve few-shot IC by a significant margin.
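The following sketch shows, under assumed tensor shapes and a hypothetical weighting term lam, how a supervised contrastive loss can be added as a regularizer to a prototypical-network objective of the kind described above; it is not the authors' code.

    import torch
    import torch.nn.functional as F

    def prototypical_loss(support, support_labels, query, query_labels):
        # Class prototypes are the mean support embedding per class.
        classes = support_labels.unique()
        protos = torch.stack([support[support_labels == c].mean(0) for c in classes])
        dists = torch.cdist(query, protos)                      # (n_query, n_classes)
        targets = torch.stack([(classes == y).nonzero().squeeze() for y in query_labels])
        # Classify each query by (negative) distance to the prototypes.
        return F.cross_entropy(-dists, targets)

    def supervised_contrastive_loss(embeddings, labels, temperature=0.1):
        # Pull together embeddings that share a label, push apart the rest.
        z = F.normalize(embeddings, dim=1)
        sim = z @ z.T / temperature
        mask = (labels.unsqueeze(0) == labels.unsqueeze(1)).float()
        mask.fill_diagonal_(0)                                   # exclude self-pairs
        logits = sim - sim.max(dim=1, keepdim=True).values.detach()
        exp = torch.exp(logits) * (1 - torch.eye(len(labels)))
        log_prob = logits - torch.log(exp.sum(1, keepdim=True) + 1e-8)
        pos_pairs = mask.sum(1).clamp(min=1)
        return -(mask * log_prob).sum(1).div(pos_pairs).mean()

    def total_loss(support, s_labels, query, q_labels, lam=0.1):
        # Prototypical objective plus the contrastive term as a regularizer.
        proto = prototypical_loss(support, s_labels, query, q_labels)
        contrastive = supervised_contrastive_loss(torch.cat([support, query]),
                                                  torch.cat([s_labels, q_labels]))
        return proto + lam * contrastive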