Scientific and Creative Analogies in Pretrained Language Models
Tamara Czinczoll, Helen Yannakoudakis, Pushkar Mishra, Ekaterina Shutova
Abstract
This paper examines the encoding of analogy in large-scale pretrained language models, such as BERT and GPT-2. Existing analogy datasets typically focus on a limited set of analogical relations, with a high similarity of the two domains between which the analogy holds. As a more realistic setup, we introduce the Scientific and Creative Analogy dataset (SCAN), a novel analogy dataset containing systematic mappings of multiple attributes and relational structures across dissimilar domains. Using this dataset, we test the analogical reasoning capabilities of several widely-used pretrained language models (LMs). We find that state-of-the-art LMs achieve low performance on these complex analogy tasks, highlighting the challenges still posed by analogy understanding.
- Anthology ID:
- 2022.findings-emnlp.153
- Volume:
- Findings of the Association for Computational Linguistics: EMNLP 2022
- Month:
- December
- Year:
- 2022
- Address:
- Abu Dhabi, United Arab Emirates
- Venue:
- Findings
- Publisher:
- Association for Computational Linguistics
- Pages:
- 2094–2100
- URL:
- https://aclanthology.org/2022.findings-emnlp.153
- Cite (ACL):
- Tamara Czinczoll, Helen Yannakoudakis, Pushkar Mishra, and Ekaterina Shutova. 2022. Scientific and Creative Analogies in Pretrained Language Models. In Findings of the Association for Computational Linguistics: EMNLP 2022, pages 2094–2100, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics.
- Cite (Informal):
- Scientific and Creative Analogies in Pretrained Language Models (Czinczoll et al., Findings 2022)
- PDF:
- https://preview.aclanthology.org/ingestion-script-update/2022.findings-emnlp.153.pdf
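To make the kind of probing described in the abstract concrete, below is a minimal sketch of one common way to test analogy completion with a pretrained LM such as GPT-2: score each candidate completion by its log-likelihood under the model. This is an illustrative assumption, not the paper's actual evaluation protocol, and the example analogy and candidate words are hypothetical.

```python
# Minimal sketch: scoring analogy completions with GPT-2 by log-likelihood.
# NOT the exact protocol used in the paper; the prompt and candidates below
# are hypothetical illustrations.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def sequence_log_likelihood(text: str) -> float:
    """Return the total log-likelihood GPT-2 assigns to `text`."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        out = model(ids, labels=ids)
    # `out.loss` is the mean cross-entropy over the predicted tokens;
    # negate and rescale to recover the summed log-likelihood.
    return -out.loss.item() * (ids.size(1) - 1)

prompt = "The sun is to the solar system as the nucleus is to the"
candidates = ["atom", "ocean", "orchestra"]
scores = {c: sequence_log_likelihood(f"{prompt} {c}.") for c in candidates}
print(max(scores, key=scores.get))  # completion the model finds most likely
```

Under this setup, a model with some grasp of the cross-domain mapping should assign the highest likelihood to the analogically correct candidate; the paper's finding of low performance suggests that such preferences are often unreliable for dissimilar domains.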