Abstract
Can we take existing language models and refine them for zero-shot commonsense reasoning? This paper presents an initial study exploring the feasibility of zero-shot commonsense reasoning for the Winograd Schema Challenge by formulating the task as self-supervised refinement of a pre-trained language model. In contrast to previous studies that rely on fine-tuning with annotated datasets, we seek to boost conceptualization via loss landscape refinement. To this end, we propose a novel self-supervised learning approach that refines the language model using a set of linguistic perturbations of similar concept relationships. Empirical analysis of our conceptually simple framework demonstrates the viability of zero-shot commonsense reasoning on multiple benchmarks.
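The abstract only summarizes the approach; as a rough illustration of the zero-shot evaluation side, the sketch below scores the two Winograd-schema candidates with an off-the-shelf masked language model and picks the likelier one. This is not the authors' method (their contribution is the self-supervised refinement applied before such scoring; see the linked repository), and the model name, slot template, and helper function are illustrative assumptions.

```python
# Minimal sketch of zero-shot Winograd candidate scoring with a masked LM.
# Not the paper's refinement objective; purely illustrative.
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

MODEL = "bert-base-uncased"  # assumption: any masked LM could stand in here
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForMaskedLM.from_pretrained(MODEL).eval()

def candidate_log_prob(template: str, candidate: str) -> float:
    """Log-probability of `candidate` filling the `_` slot in `template`,
    computed by masking the candidate tokens and summing their log-probs."""
    cand_ids = tokenizer(candidate, add_special_tokens=False)["input_ids"]
    # Replace the slot with one [MASK] per candidate subword token.
    masked = template.replace("_", " ".join(tokenizer.mask_token for _ in cand_ids))
    enc = tokenizer(masked, return_tensors="pt")
    mask_pos = (enc["input_ids"][0] == tokenizer.mask_token_id).nonzero(as_tuple=True)[0]
    with torch.no_grad():
        logits = model(**enc).logits[0]
    log_probs = logits.log_softmax(dim=-1)
    return sum(log_probs[p, t].item() for p, t in zip(mask_pos, cand_ids))

# Winograd-style example: resolve the pronoun slot to one of two candidates.
sentence = "The trophy does not fit in the suitcase because _ is too big."
for cand in ("the trophy", "the suitcase"):
    print(cand, candidate_log_prob(sentence, cand))
```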
- Anthology ID: 2021.emnlp-main.688
- Volume: Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing
- Month: November
- Year: 2021
- Address: Online and Punta Cana, Dominican Republic
- Editors: Marie-Francine Moens, Xuanjing Huang, Lucia Specia, Scott Wen-tau Yih
- Venue: EMNLP
- Publisher: Association for Computational Linguistics
- Pages: 8737–8743
- URL: https://aclanthology.org/2021.emnlp-main.688
- DOI: 10.18653/v1/2021.emnlp-main.688
- Cite (ACL): Tassilo Klein and Moin Nabi. 2021. Towards Zero-shot Commonsense Reasoning with Self-supervised Refinement of Language Models. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 8737–8743, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
- Cite (Informal): Towards Zero-shot Commonsense Reasoning with Self-supervised Refinement of Language Models (Klein & Nabi, EMNLP 2021)
- PDF: https://preview.aclanthology.org/improve-issue-templates/2021.emnlp-main.688.pdf
- Code: sap-samples/emnlp2021-contrastive-refinement
- Data: GAP Coreference Dataset, WSC, WinoBias, WinoGrande