@inproceedings{chang-etal-2021-selectgen,
    title = "The {S}elect{G}en Challenge: Finding the Best Training Samples for Few-Shot Neural Text Generation",
    author = "Chang, Ernie  and
      Shen, Xiaoyu  and
      Marin, Alex  and
      Demberg, Vera",
    editor = "Belz, Anya  and
      Fan, Angela  and
      Reiter, Ehud  and
      Sripada, Yaji",
    booktitle = "Proceedings of the 14th International Conference on Natural Language Generation",
    month = aug,
    year = "2021",
    address = "Aberdeen, Scotland, UK",
    publisher = "Association for Computational Linguistics",
    url = "https://preview.aclanthology.org/ingest-emnlp/2021.inlg-1.36/",
    doi = "10.18653/v1/2021.inlg-1.36",
    pages = "325--330",
    abstract = "We propose a shared task on training instance selection for few-shot neural text generation. Large-scale pretrained language models have led to dramatic improvements in few-shot text generation. Nonetheless, almost all previous work simply applies random sampling to select the few-shot training instances. Little to no attention has been paid to the selection strategies and how they would affect model performance. Studying the selection strategy can help us (1) make the most use of our annotation budget in downstream tasks and (2) better benchmark few-shot text generative models. We welcome submissions that present their selection strategies and the effects on the generation quality."
}