Ashutosh Kumar
2025
Generative or Discriminative? Revisiting Text Classification in the Era of Transformers
Siva Rajesh Kasa | Karan Gupta | Sumegh Roychowdhury | Ashutosh Kumar | Yaswanth Biruduraju | Santhosh Kumar Kasa | Pattisapu Nikhil Priyatam | Arindam Bhattacharya | Shailendra Agarwal | Vijay Huddar
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing
*The comparison between discriminative and generative classifiers has intrigued researchers since [Efron (1975)’s](https://www.jstor.org/stable/2285453) seminal analysis of logistic regression versus discriminant analysis. While early theoretical work established that generative classifiers exhibit lower sample complexity but higher asymptotic error in simple linear settings, these trade-offs remain unexplored in the transformer era. We present the first comprehensive evaluation of modern generative and discriminative architectures—Auto-regressive, Masked Language Modeling, Discrete Diffusion, and Encoders—for text classification. Our study reveals that the classical “two regimes” phenomenon manifests distinctly across different architectures and training paradigms. Beyond accuracy, we analyze sample efficiency, calibration, noise robustness, and ordinality across diverse scenarios. Our findings offer practical guidance for selecting the most suitable modeling approach based on real-world constraints such as latency and data limitations.*
2023
NL-Augmenter: A Framework for Task-Sensitive Natural Language Augmentation
Kaustubh Dhole | Varun Gangal | Sebastian Gehrmann | Aadesh Gupta | Zhenhao Li | Saad Mahamood | Abinaya Mahadiran | Simon Mille | Ashish Shrivastava | Samson Tan | Tongshang Wu | Jascha Sohl-Dickstein | Jinho Choi | Eduard Hovy | Ondřej Dušek | Sebastian Ruder | Sajant Anand | Nagender Aneja | Rabin Banjade | Lisa Barthe | Hanna Behnke | Ian Berlot-Attwell | Connor Boyle | Caroline Brun | Marco Antonio Sobrevilla Cabezudo | Samuel Cahyawijaya | Emile Chapuis | Wanxiang Che | Mukund Choudhary | Christian Clauss | Pierre Colombo | Filip Cornell | Gautier Dagan | Mayukh Das | Tanay Dixit | Thomas Dopierre | Paul-Alexis Dray | Suchitra Dubey | Tatiana Ekeinhor | Marco Di Giovanni | Tanya Goyal | Rishabh Gupta | Louanes Hamla | Sang Han | Fabrice Harel-Canada | Antoine Honoré | Ishan Jindal | Przemysław Joniak | Denis Kleyko | Venelin Kovatchev | Kalpesh Krishna | Ashutosh Kumar | Stefan Langer | Seungjae Ryan Lee | Corey James Levinson | Hualou Liang | Kaizhao Liang | Zhexiong Liu | Andrey Lukyanenko | Vukosi Marivate | Gerard de Melo | Simon Meoni | Maxine Meyer | Afnan Mir | Nafise Sadat Moosavi | Niklas Meunnighoff | Timothy Sum Hon Mun | Kenton Murray | Marcin Namysl | Maria Obedkova | Priti Oli | Nivranshu Pasricha | Jan Pfister | Richard Plant | Vinay Prabhu | Vasile Pais | Libo Qin | Shahab Raji | Pawan Kumar Rajpoot | Vikas Raunak | Roy Rinberg | Nicholas Roberts | Juan Diego Rodriguez | Claude Roux | Vasconcellos Samus | Ananya Sai | Robin Schmidt | Thomas Scialom | Tshephisho Sefara | Saqib Shamsi | Xudong Shen | Yiwen Shi | Haoyue Shi | Anna Shvets | Nick Siegel | Damien Sileo | Jamie Simon | Chandan Singh | Roman Sitelew | Priyank Soni | Taylor Sorensen | William Soto | Aman Srivastava | Aditya Srivatsa | Tony Sun | Mukund Varma | A Tabassum | Fiona Tan | Ryan Teehan | Mo Tiwari | Marie Tolkiehn | Athena Wang | Zijian Wang | Zijie Wang | Gloria Wang | Fuxuan Wei | Bryan Wilie | Genta Indra Winata | Xinyu Wu | Witold Wydmanski | Tianbao Xie | Usama Yaseen | Michael Yee | Jing Zhang | Yue Zhang
Northern European Journal of Language Technology, Volume 9
Data augmentation is an important method for evaluating the robustness of and enhancing the diversity of training data for natural language processing (NLP) models. In this paper, we present NL-Augmenter, a new participatory Python-based natural language (NL) augmentation framework which supports the creation of transformations (modifications to the data) and filters (data splits according to specific features). We describe the framework and an initial set of 117 transformations and 23 filters for a variety of NL tasks annotated with noisy descriptive tags. The transformations incorporate noise, intentional and accidental human mistakes, socio-linguistic variation, semantically-valid style, syntax changes, as well as artificial constructs that are unambiguous to humans. We demonstrate the efficacy of NL-Augmenter by using its transformations to analyze the robustness of popular language models. We find different models to be differently challenged on different tasks, with quasi-systematic score decreases. The infrastructure, datacards, and robustness evaluation results are publicly available on GitHub for the benefit of researchers working on paraphrase generation, robustness analysis, and low-resource NLP.
2022
Striking a Balance: Alleviating Inconsistency in Pre-trained Models for Symmetric Classification Tasks
Ashutosh Kumar | Aditya Joshi
Findings of the Association for Computational Linguistics: ACL 2022
While fine-tuning pre-trained models for downstream classification is the conventional paradigm in NLP, often task-specific nuances may not get captured in the resultant models. Specifically, for tasks that take two inputs and require the output to be invariant of the order of the inputs, inconsistency is often observed in the predicted labels or confidence scores. We highlight this model shortcoming and apply a consistency loss function to alleviate inconsistency in symmetric classification. Our results show an improved consistency in predictions for three paraphrase detection datasets without a significant drop in the accuracy scores. We examine the classification performance of six datasets (both symmetric and non-symmetric) to showcase the strengths and limitations of our approach.
2020
Syntax-Guided Controlled Generation of Paraphrases
Ashutosh Kumar | Kabir Ahuja | Raghuram Vadapalli | Partha Talukdar
Transactions of the Association for Computational Linguistics, Volume 8
Given a sentence (e.g., “I like mangoes”) and a constraint (e.g., sentiment flip), the goal of controlled text generation is to produce a sentence that adapts the input sentence to meet the requirements of the constraint (e.g., “I hate mangoes”). Going beyond such simple constraints, recent work has started exploring the incorporation of complex syntactic-guidance as constraints in the task of controlled paraphrase generation. In these methods, syntactic-guidance is sourced from a separate exemplar sentence. However, this prior work has only utilized limited syntactic information available in the parse tree of the exemplar sentence. We address this limitation in the paper and propose Syntax Guided Controlled Paraphraser (SGCP), an end-to-end framework for syntactic paraphrase generation. We find that SGCP can generate syntax-conforming sentences while not compromising on relevance. We perform extensive automated and human evaluations over multiple real-world English language datasets to demonstrate the efficacy of SGCP over state-of-the-art baselines. To drive future research, we have made SGCP’s source code available.
2019
Submodular Optimization-based Diverse Paraphrasing and its Effectiveness in Data Augmentation
Ashutosh Kumar | Satwik Bhattamishra | Manik Bhandari | Partha Talukdar
Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers)
Inducing diversity in the task of paraphrasing is an important problem in NLP with applications in data augmentation and conversational agents. Previous paraphrasing approaches have mainly focused on the issue of generating semantically similar paraphrases while paying little attention towards diversity. In fact, most of the methods rely solely on top-k beam search sequences to obtain a set of paraphrases. The resulting set, however, contains many structurally similar sentences. In this work, we focus on the task of obtaining highly diverse paraphrases while not compromising on paraphrasing quality. We provide a novel formulation of the problem in terms of monotone submodular function maximization, specifically targeted towards the task of paraphrasing. Additionally, we demonstrate the effectiveness of our method for data augmentation on multiple tasks such as intent classification and paraphrase recognition. In order to drive further research, we have made the source code available.
Co-authors
- Partha Talukdar 2
- Shailendra Agarwal 1
- Kabir Ahuja 1
- Sajant Anand 1
- Nagender Aneja 1
- Rabin Banjade 1
- Lisa Barthe 1
- Hanna Behnke 1
- Ian Berlot-Attwell 1
- Manik Bhandari 1
- Arindam Bhattacharya 1
- Satwik Bhattamishra 1
- Yaswanth Biruduraju 1
- Connor Boyle 1
- Caroline Brun 1
- Samuel Cahyawijaya 1
- Emile Chapuis 1
- Wanxiang Che 1
- Jinho D. Choi 1
- Mukund Choudhary 1
- Christian Clauss 1
- Pierre Colombo 1
- Filip Cornell 1
- Gautier Dagan 1
- Mayukh Das 1
- Gerard De Melo 1
- Kaustubh Dhole 1
- Marco Di Giovanni 1
- Tanay Dixit 1
- Thomas Dopierre 1
- Paul-Alexis Dray 1
- Suchitra Dubey 1
- Ondřej Dušek 1
- Tatiana Ekeinhor 1
- Varun Gangal 1
- Sebastian Gehrmann 1
- Tanya Goyal 1
- Aadesh Gupta 1
- Rishabh Gupta 1
- Karan Gupta 1
- Louanes Hamla 1
- Sang Han 1
- Fabrice Harel-Canada 1
- Antoine Honoré 1
- Eduard Hovy 1
- Vijay Huddar 1
- Ishan Jindal 1
- Przemysław Joniak 1
- Aditya Joshi 1
- Siva Rajesh Kasa 1
- Santhosh Kumar Kasa 1
- Denis Kleyko 1
- Venelin Kovatchev 1
- Kalpesh Krishna 1
- Stefan Langer 1
- Seungjae Ryan Lee 1
- Corey James Levinson 1
- Zhenhao Li 1
- Hualou Liang 1
- Kaizhao Liang 1
- Zhexiong Liu 1
- Andrey Lukyanenko 1
- Abinaya Mahadiran 1
- Saad Mahamood 1
- Vukosi Marivate 1
- Simon Meoni 1
- Niklas Meunnighoff 1
- Maxine Meyer 1
- Simon Mille 1
- Afnan Mir 1
- Nafise Sadat Moosavi 1
- Timothy Sum Hon Mun 1
- Kenton Murray 1
- Marcin Namysl 1
- Maria Obedkova 1
- Priti Oli 1
- Vasile Pais 1
- Nivranshu Pasricha 1
- Jan Pfister 1
- Richard Plant 1
- Vinay Prabhu 1
- Pattisapu Nikhil Priyatam 1
- Libo Qin 1
- Shahab Raji 1
- Pawan Kumar Rajpoot 1
- Vikas Raunak 1
- Roy Rinberg 1
- Nicholas Roberts 1
- Juan Diego Rodriguez 1
- Claude Roux 1
- Sumegh Roychowdhury 1
- Sebastian Ruder 1
- Ananya Sai 1
- Vasconcellos Samus 1
- Robin Schmidt 1
- Thomas Scialom 1
- Tshephisho Sefara 1
- Saqib Shamsi 1
- Xudong Shen 1
- Yiwen Shi 1
- Freda Shi 1
- Ashish Shrivastava 1
- Anna Shvets 1
- Nick Siegel 1
- Damien Sileo 1
- Jamie Simon 1
- Chandan Singh 1
- Roman Sitelew 1
- Marco Antonio Sobrevilla Cabezudo 1
- Jascha Sohl-Dickstein 1
- Priyank Soni 1
- Taylor Sorensen 1
- William Soto Martinez 1
- Aman Srivastava 1
- Aditya Srivatsa 1
- Tony Sun 1
- A Tabassum 1
- Samson Tan 1
- Fiona Tan 1
- Ryan Teehan 1
- Mo Tiwari 1
- Marie Tolkiehn 1
- Raghuram Vadapalli 1
- Mukund Varma 1
- Athena Wang 1
- Zijian Wang 1
- Zijie Wang 1
- Gloria Wang 1
- Fuxuan Wei 1
- Bryan Wilie 1
- Genta Indra Winata 1
- Tongshang Wu 1
- Xinyu Wu 1
- Witold Wydmanski 1
- Tianbao Xie 1
- Usama Yaseen 1
- Michael Yee 1
- Jing Zhang 1
- Yue Zhang 1