Sanjiban Choudhury
2024
UNcommonsense Reasoning: Abductive Reasoning about Uncommon Situations
Wenting Zhao | Justin Chiu | Jena Hwang | Faeze Brahman | Jack Hessel | Sanjiban Choudhury | Yejin Choi | Xiang Li | Alane Suhr
Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)
Language technologies that accurately model the dynamics of events must perform commonsense reasoning. Existing work evaluating commonsense reasoning focuses on making inferences about common, everyday situations. To instead investigate the ability to model unusual, unexpected, and unlikely situations, we explore the task of uncommonsense abductive reasoning. Given a piece of context with an unexpected outcome, this task requires reasoning abductively to generate an explanation that makes the unexpected outcome more likely in the context. To this end, we curate and release a new English-language corpus called UNcommonsense. We characterize the performance differences between human explainers and the best-performing large language models, finding that model-enhanced human-written explanations achieve the highest quality by trading off between specificity and diversity. Finally, we experiment with several imitation learning algorithms to train open and accessible language models on this task. When compared with a vanilla supervised fine-tuning approach, these methods consistently reduce the lose rate on both commonsense and uncommonsense abductive reasoning, as judged by human evaluators.