Abstract
Understanding common sense is important for effective natural language reasoning. One type of common sense is how two objects compare on physical properties such as size and weight: e.g., ‘is a house bigger than a person?’. We probe whether pre-trained word representations capture such comparisons and find that they, in fact, achieve higher accuracy than previous approaches. They also generalize to comparisons involving objects not seen during training. We investigate how such comparisons are made: the models learn a consistent ordering over all the objects in the comparisons. Probing models have significantly higher accuracy than baseline models that exploit dataset artifacts, e.g., memorizing that some words are larger than any other word.
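To make "probing" concrete, the sketch below trains a simple linear probe (logistic regression) on frozen pre-trained word vectors to predict which of two objects is bigger. It is a minimal illustration under stated assumptions, not the authors' implementation: the GloVe model name, the toy object pairs, and the labels are all introduced here for demonstration; the paper's actual data, representations, and probe are described in the full text.

```python
"""Minimal probing sketch (not the authors' code): train a linear probe on
frozen pre-trained word vectors to predict which of two objects is bigger.
The object pairs and labels below are illustrative, not the paper's dataset."""
import numpy as np
import gensim.downloader as api                 # assumes gensim is installed
from sklearn.linear_model import LogisticRegression

# Frozen pre-trained representations (GloVe here, as one illustrative choice).
vectors = api.load("glove-wiki-gigaword-50")    # downloads vectors on first use

# Toy comparison data: (object_a, object_b, label), label = 1 if a is bigger.
train_pairs = [
    ("house", "person", 1), ("person", "house", 0),
    ("elephant", "mouse", 1), ("mouse", "elephant", 0),
    ("truck", "apple", 1), ("apple", "truck", 0),
]
test_pairs = [("mountain", "cat", 1), ("coin", "building", 0)]  # unseen objects

def featurize(pairs):
    """Concatenate the two frozen word vectors; only the probe's weights are learned."""
    X = np.stack([np.concatenate([vectors[a], vectors[b]]) for a, b, _ in pairs])
    y = np.array([label for _, _, label in pairs])
    return X, y

X_train, y_train = featurize(train_pairs)
X_test, y_test = featurize(test_pairs)

probe = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("accuracy on pairs with unseen objects:", probe.score(X_test, y_test))
```

Keeping the word vectors frozen and learning only the probe's weights is what lets accuracy be read as evidence about what the pre-trained representations themselves encode, rather than about what the classifier can learn on its own.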
- Anthology ID: D19-6016
- Volume: Proceedings of the First Workshop on Commonsense Inference in Natural Language Processing
- Month: November
- Year: 2019
- Address: Hong Kong, China
- Editors: Simon Ostermann, Sheng Zhang, Michael Roth, Peter Clark
- Venue: WS
- Publisher: Association for Computational Linguistics
- Pages: 130–135
- URL: https://aclanthology.org/D19-6016
- DOI: 10.18653/v1/D19-6016
- Cite (ACL): Pranav Goel, Shi Feng, and Jordan Boyd-Graber. 2019. How Pre-trained Word Representations Capture Commonsense Physical Comparisons. In Proceedings of the First Workshop on Commonsense Inference in Natural Language Processing, pages 130–135, Hong Kong, China. Association for Computational Linguistics.
- Cite (Informal): How Pre-trained Word Representations Capture Commonsense Physical Comparisons (Goel et al., 2019)
- PDF: https://preview.aclanthology.org/nschneid-patch-3/D19-6016.pdf