Nihal V. Nayak
2026
Revisiting Generalization Across Difficulty Levels: It’s Not So Easy
Yeganeh Kordi | Nihal V. Nayak | Max Zuo | Ilana Nguyen | Stephen Bach
Proceedings of the 19th Conference of the European Chapter of the Association for Computational Linguistics (Volume 1: Long Papers)
We investigate how well large language models (LLMs) generalize across different task difficulties, a key question for effective data curation and evaluation. Existing research is mixed regarding whether training on easier or harder data leads to better results, and whether those gains appear on easier or harder test data. We address this question by conducting a systematic evaluation of LLMs’ generalization across models, datasets, and fine-grained groups of example difficulty. We rank examples in six datasets using the outputs of thousands of different LLMs and Item Response Theory (IRT), a well-established framework for estimating difficulty in educational testing. Unlike prior work, our difficulty ratings are therefore determined solely by the abilities of many different LLMs, excluding human opinions of difficulty. With this more objective, larger-scale, and finer-grained analysis, we show that cross-difficulty generalization is often limited; training on either easy or hard data alone does not yield consistent improvements across the full range of difficulties. These results underscore the importance of including a range of difficulties in both training and evaluation data for LLMs, and show that taking shortcuts with respect to difficulty is risky.
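As a rough illustration of the IRT-based difficulty ranking described above, the sketch below fits a one-parameter (Rasch) model to synthetic model-by-item response data by gradient ascent. The data, optimizer, and scale here are illustrative stand-ins, not the paper's actual setup.

```python
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def fit_rasch(responses, n_models, n_items, lr=0.5, epochs=300):
    """Fit one ability per model and one difficulty per item by gradient
    ascent on the Bernoulli log-likelihood of a Rasch (1PL) model:
    P(correct) = sigmoid(ability - difficulty)."""
    ability = [0.0] * n_models
    difficulty = [0.0] * n_items
    cnt_a = [0] * n_models
    cnt_d = [0] * n_items
    for i, j, _ in responses:
        cnt_a[i] += 1
        cnt_d[j] += 1
    for _ in range(epochs):
        grad_a = [0.0] * n_models
        grad_d = [0.0] * n_items
        for i, j, correct in responses:
            p = sigmoid(ability[i] - difficulty[j])
            grad_a[i] += correct - p
            grad_d[j] -= correct - p
        # Average gradients per parameter so the step size is scale-free.
        for i in range(n_models):
            ability[i] += lr * grad_a[i] / cnt_a[i]
        for j in range(n_items):
            difficulty[j] += lr * grad_d[j] / cnt_d[j]
    return ability, difficulty

# Synthetic responses: model 2 is strongest, item 2 is hardest.
random.seed(0)
true_ability = [-1.0, 0.0, 1.0]
true_difficulty = [-1.0, 0.0, 1.5]
responses = [
    (i, j, 1 if random.random() < sigmoid(true_ability[i] - true_difficulty[j]) else 0)
    for i in range(3) for j in range(3) for _ in range(200)
]
ability, difficulty = fit_rasch(responses, 3, 3)
ranking = sorted(range(3), key=lambda j: difficulty[j])  # easiest to hardest
```

The recovered ranking matches the true ordering; in the paper's setting the same idea is applied at scale, with responses from thousands of LLMs rather than a toy matrix.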
2022
PromptSource: An Integrated Development Environment and Repository for Natural Language Prompts
Stephen H. Bach | Victor Sanh | Zheng-Xin Yong | Albert Webson | Colin Raffel | Nihal V. Nayak | Abheesht Sharma | Taewoon Kim | M Saiful Bari | Thibault Fevry | Zaid Alyafeai | Manan Dey | Andrea Santilli | Zhiqing Sun | Srulik Ben-David | Canwen Xu | Gunjan Chhablani | Han Wang | Jason Alan Fries | Maged S. Al-shaibani | Shanya Sharma | Urmish Thakker | Khalid Almubarak | Xiangru Tang | Dragomir Radev | Mike Tian-Jian Jiang | Alexander M. Rush
Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics: System Demonstrations
PromptSource is a system for creating, sharing, and using natural language prompts. Prompts are functions that map an example from a dataset to a natural language input and target output. Using prompts to train and query language models is an emerging area in NLP that requires new tools that let users develop and refine these prompts collaboratively. PromptSource addresses the emergent challenges in this new setting with (1) a templating language for defining data-linked prompts, (2) an interface that lets users quickly iterate on prompt development by observing outputs of their prompts on many examples, and (3) a community-driven set of guidelines for contributing new prompts to a common pool. Over 2,000 prompts for roughly 170 datasets are already available in PromptSource. PromptSource is available at https://github.com/bigscience-workshop/promptsource.
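Conceptually, a prompt is just such a function from a dataset example to an input/target pair. The stand-in below illustrates this for an NLI-style example using plain `str.format`; real PromptSource templates use a Jinja-based templating language, so this is a simplified sketch, not PromptSource's actual API.

```python
def nli_prompt(example):
    """Map an NLI example dict to an (input text, target text) pair."""
    input_text = (
        "Premise: {premise}\nHypothesis: {hypothesis}\n"
        "Does the premise entail the hypothesis? Yes, no, or maybe?"
    ).format(**example)
    # Map the integer label to a verbalized target.
    target = ["Yes", "Maybe", "No"][example["label"]]
    return input_text, target

example = {
    "premise": "A dog runs in the park.",
    "hypothesis": "An animal is outside.",
    "label": 0,
}
prompt_input, prompt_target = nli_prompt(example)
```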
2019
Study on Unsupervised Statistical Machine Translation for Backtranslation
Anush Kumar | Nihal V. Nayak | Aditya Chandra | Mydhili K. Nair
Proceedings of the International Conference on Recent Advances in Natural Language Processing (RANLP 2019)
Machine translation systems have improved drastically over the years for several language pairs. Monolingual data is often used to generate synthetic sentences that augment the training data, which has been shown to improve the performance of machine translation models. In our paper, we use an Unsupervised Statistical Machine Translation (USMT) system to generate synthetic sentences. Our study compares the performance improvements in a Neural Machine Translation model when using synthetic sentences from supervised and unsupervised machine translation models. Our approach of using USMT for backtranslation shows promise in low-resource conditions and achieves an improvement of 3.2 BLEU over the Neural Machine Translation baseline.
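The backtranslation step described above can be sketched as follows. Here `usmt_translate` is a hypothetical stand-in for a trained unsupervised SMT model translating target-language sentences back into the source language; only the data-augmentation wiring is shown.

```python
def usmt_translate(target_sentence):
    """Placeholder for an unsupervised SMT model that backtranslates a
    target-language sentence into the source language."""
    return "<synthetic source for: %s>" % target_sentence

def augment(parallel_pairs, monolingual_targets):
    """Return the real parallel pairs plus (synthetic source, real target)
    pairs built from monolingual target-language data."""
    synthetic = [(usmt_translate(t), t) for t in monolingual_targets]
    return parallel_pairs + synthetic

real = [("hello", "bonjour")]           # genuine parallel data
mono = ["merci", "au revoir"]           # monolingual target-side data
training_data = augment(real, mono)     # 1 real + 2 synthetic pairs
```

The NMT model is then trained on `training_data` as if all pairs were genuine; the target side of every pair is real text, which is what makes backtranslation effective.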
2018
Context Based Approach for Second Language Acquisition
Nihal V. Nayak | Arjun R. Rao
Proceedings of the Thirteenth Workshop on Innovative Use of NLP for Building Educational Applications
SLAM 2018 focuses on predicting a student’s mistakes while using the Duolingo application. In this paper, we describe the system we developed for this shared task. Our system uses a logistic regression model to predict the likelihood of a student making a mistake while answering an exercise on Duolingo in all three language tracks - English/Spanish (en/es), Spanish/English (es/en), and French/English (fr/en). We conduct an ablation study with several features during the development of this system and discover that context-based features play a major role in language acquisition modeling. Our model beats Duolingo’s baseline scores in all three language tracks (AUROC scores of 0.821 for en/es, 0.790 for es/en, and 0.812 for fr/en). Our work makes a case for providing favourable textual context for students while learning a second language.
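A minimal sketch of the kind of feature-based mistake predictor the abstract describes, using logistic regression trained by gradient descent. The feature names and toy data are illustrative assumptions, not the paper's actual feature set.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train_logreg(X, y, lr=0.1, epochs=1000):
    """Plain unregularized logistic regression via batch gradient descent."""
    w = [0.0] * len(X[0])
    b = 0.0
    n = len(X)
    for _ in range(epochs):
        gw = [0.0] * len(w)
        gb = 0.0
        for xi, yi in zip(X, y):
            err = sigmoid(sum(wj * xj for wj, xj in zip(w, xi)) + b) - yi
            for k, xk in enumerate(xi):
                gw[k] += err * xk
            gb += err
        w = [wj - lr * gj / n for wj, gj in zip(w, gw)]
        b -= lr * gb / n
    return w, b

# Toy features per exercise attempt (names are illustrative):
# [token rarity, days since last practice, context familiarity].
X = [[0.9, 5.0, 0.2], [0.1, 0.5, 0.8], [0.8, 3.0, 0.1], [0.2, 1.0, 0.9]]
y = [1, 0, 1, 0]  # 1 = the student made a mistake on this attempt
w, b = train_logreg(X, y)
p_mistake = sigmoid(sum(wj * xj for wj, xj in zip(w, X[0])) + b)
```

An ablation study like the paper's would retrain this model with feature groups removed and compare AUROC to measure each group's contribution.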
Co-authors
- Maged S. Al-shaibani
- Khalid Almubarak
- Zaid Alyafeai
- Stephen H. Bach
- Stephen Bach
- M Saiful Bari
- Srulik Ben-David
- Aditya Chandra
- Gunjan Chhablani
- Tanmay Chinchore
- Manan Dey
- Jason Alan Fries
- Thibault Févry
- Aishwarya Hanumanth Rao
- H. S. Jamadagni
- Mike Tian-Jian Jiang
- Taewoon Kim
- Yeganeh Kordi
- Anush Kumar
- G. M. Lingaraju
- Shane Michael Martin
- Mydhili K. Nair
- Ilana Nguyen
- Dragomir Radev
- Colin Raffel
- Arjun R. Rao
- Alexander M. Rush
- Victor Sanh
- Andrea Santilli
- Abheesht Sharma
- Shanya Sharma
- Sagar Nagaraj Simha
- Zhiqing Sun
- Xiangru Tang
- Urmish Thakker
- Han Wang (王涵)
- Albert Webson
- Canwen Xu
- Zheng Xin Yong
- Max Zuo