Sujan Reddy A



2022

Super-NaturalInstructions: Generalization via Declarative Instructions on 1600+ NLP Tasks
Yizhong Wang | Swaroop Mishra | Pegah Alipoormolabashi | Yeganeh Kordi | Amirreza Mirzaei | Atharva Naik | Arjun Ashok | Arut Selvan Dhanasekaran | Anjana Arunkumar | David Stap | Eshaan Pathak | Giannis Karamanolakis | Haizhi Lai | Ishan Purohit | Ishani Mondal | Jacob Anderson | Kirby Kuznia | Krima Doshi | Kuntal Kumar Pal | Maitreya Patel | Mehrad Moradshahi | Mihir Parmar | Mirali Purohit | Neeraj Varshney | Phani Rohitha Kaza | Pulkit Verma | Ravsehaj Singh Puri | Rushang Karia | Savan Doshi | Shailaja Keyur Sampat | Siddhartha Mishra | Sujan Reddy A | Sumanta Patro | Tanay Dixit | Xudong Shen
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing

How well can NLP models generalize to a variety of unseen tasks when provided with task instructions? To address this question, we first introduce Super-NaturalInstructions, a benchmark of 1,616 diverse NLP tasks and their expert-written instructions. Our collection covers 76 distinct task types, including but not limited to classification, extraction, infilling, sequence tagging, text rewriting, and text composition. This large and diverse collection of tasks enables rigorous benchmarking of cross-task generalization under instructions: training models to follow instructions on a subset of tasks and evaluating them on the remaining unseen ones. Furthermore, we build Tk-Instruct, a transformer model trained to follow a variety of in-context instructions (plain language task definitions or k-shot examples). Our experiments show that Tk-Instruct outperforms existing instruction-following models such as InstructGPT by over 9% on our benchmark despite being an order of magnitude smaller. We further analyze generalization as a function of various scaling parameters, such as the number of observed tasks, the number of instances per task, and model sizes. We hope our dataset and model facilitate future progress towards more general-purpose NLP models.
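To make the "in-context instructions" setup concrete, below is a minimal sketch (not the authors' released code) of Tk-Instruct-style inference: the task definition and a positive example are concatenated with the test input into a single prompt for a seq2seq model. The checkpoint name and the exact prompt wording are assumptions for illustration; the released checkpoints on Hugging Face follow a similar naming scheme.

```python
# Sketch of instruction-following inference with a Tk-Instruct-style model.
# Assumes the transformers library is installed; the checkpoint name is
# illustrative, not guaranteed to match the paper's exact release.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_name = "allenai/tk-instruct-3b-def-pos"  # assumed checkpoint name
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

# Prompt = plain-language task definition + k-shot positive example + query.
definition = (
    "Definition: In this task, you are given a sentence. "
    "Classify its sentiment as 'positive' or 'negative'."
)
example = "Positive Example 1 - Input: I loved this movie. Output: positive"
query = "Now complete the following example - Input: The plot was dull. Output:"

prompt = "\n".join([definition, example, query])
inputs = tokenizer(prompt, return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=10)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```

Evaluating cross-task generalization then amounts to holding out entire tasks (not just instances) and prompting the trained model with their unseen definitions.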

Automating Human Evaluation of Dialogue Systems
Sujan Reddy A
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies: Student Research Workshop

Automated metrics for evaluating dialogue systems, such as BLEU and METEOR, correlate only weakly with human judgments. Human evaluation is therefore often used to supplement these metrics, but it is both time-consuming and expensive. This paper provides an alternative to human evaluation with respect to three aspects of dialogue systems: naturalness, informativeness, and quality. I propose fine-tuning a BERT model with three prediction heads to predict whether the system-generated output is natural, fluent, and informative. The proposed model achieves an average accuracy of around 77% across these three labels. I also design a baseline that uses three separate BERT models, one per label. Experimental analysis shows that a single shared model computing all three labels outperforms the three separate models.
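The shared-model design described above is straightforward to sketch: one BERT encoder feeds three independent binary classification heads, one per evaluation aspect. The following is a minimal sketch under stated assumptions (the paper's exact head names, pooling choice, and loss weighting are not given here; `bert-base-uncased` is an assumed backbone), not the paper's released code.

```python
# Sketch of a shared BERT encoder with three binary prediction heads,
# one per dialogue-evaluation aspect. Assumes torch and transformers.
import torch.nn as nn
from transformers import BertModel

class SharedDialogueEvaluator(nn.Module):
    def __init__(self, model_name: str = "bert-base-uncased"):
        super().__init__()
        self.encoder = BertModel.from_pretrained(model_name)
        hidden = self.encoder.config.hidden_size
        # One binary head per aspect; the aspect names are illustrative.
        self.heads = nn.ModuleDict({
            aspect: nn.Linear(hidden, 2)
            for aspect in ("naturalness", "informativeness", "quality")
        })

    def forward(self, input_ids, attention_mask):
        # The pooled [CLS] representation is shared by all three heads.
        pooled = self.encoder(
            input_ids=input_ids, attention_mask=attention_mask
        ).pooler_output
        return {name: head(pooled) for name, head in self.heads.items()}

# Training would sum one cross-entropy loss per head, e.g.:
# loss = sum(F.cross_entropy(logits[a], labels[a]) for a in logits)
```

The baseline replaces this shared encoder with three fully separate BERT models, so the comparison isolates the benefit of sharing representations across the three labels.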