Getting to Production with Few-shot Natural Language Generation Models
Peyman Heidari | Arash Einolghozati | Shashank Jain | Soumya Batra | Lee Callender | Ankit Arun | Shawn Mei | Sonal Gupta | Pinar Donmez | Vikas Bhardwaj | Anuj Kumar | Michael White
Proceedings of the 22nd Annual Meeting of the Special Interest Group on Discourse and Dialogue, 2021
In this paper, we study the utilization of pre-trained language models to enable few-shot Natural Language Generation (NLG) in task-oriented dialog systems. We introduce a system consisting of iterative self-training and an extensible mini-template framework that textualizes the structured input data into semi-natural text, to take full advantage of pre-trained language models. We compare various representations of NLG models' input and output and show that transforming the input and output to resemble what the language model has seen during pre-training substantially improves the model's few-shot performance. We show that neural models can be trained with as few as 300 annotated examples while providing high fidelity, considerably lowering the resource requirements for standing up a new domain or language. This level of data efficiency removes the need for crowd-sourced data collection, resulting in higher-quality data annotated by expert linguists. In addition, model maintenance and debugging processes improve in this few-shot setting. Finally, we explore distillation and a caching system to satisfy the latency requirements of real-world systems.
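To make the mini-template idea concrete, below is a minimal sketch of textualizing a structured meaning representation into semi-natural text for a pre-trained seq2seq model. The slot templates, the `textualize` function, and the example meaning representation are illustrative assumptions, not the paper's actual framework.

```python
# Minimal sketch of "mini-template" textualization: each slot in a
# structured meaning representation is rendered with a tiny template,
# so the model input resembles the natural text a pre-trained LM saw
# during pre-training. All names and templates here are hypothetical.

# One mini-template per slot type (illustrative examples).
MINI_TEMPLATES = {
    "name": "the restaurant is called {value}",
    "food": "it serves {value} food",
    "area": "it is located in the {value}",
    "price_range": "its prices are {value}",
}

def textualize(meaning_representation: dict) -> str:
    """Turn a flat slot-value dict into semi-natural text."""
    pieces = []
    for slot, value in meaning_representation.items():
        # Fall back to a generic "slot is value" phrasing for unknown slots.
        template = MINI_TEMPLATES.get(slot, f"{slot} is {{value}}")
        pieces.append(template.format(value=value))
    return "; ".join(pieces)

# The resulting string would serve as the source sequence for a
# fine-tuned pre-trained seq2seq model.
mr = {"name": "Aroma", "food": "Italian", "area": "city centre"}
print(textualize(mr))
# -> the restaurant is called Aroma; it serves Italian food;
#    it is located in the city centre
```

Because each slot carries its own small template, the framework stays extensible: adding a new slot type means adding one entry to the template table rather than re-annotating training data.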