@inproceedings{goel-etal-2019-pre,
    title = "How Pre-trained Word Representations Capture Commonsense Physical Comparisons",
    author = "Goel, Pranav  and
      Feng, Shi  and
      Boyd-Graber, Jordan",
    editor = "Ostermann, Simon  and
      Zhang, Sheng  and
      Roth, Michael  and
      Clark, Peter",
    booktitle = "Proceedings of the First Workshop on Commonsense Inference in Natural Language Processing",
    month = nov,
    year = "2019",
    address = "Hong Kong, China",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/D19-6016/",
    doi = "10.18653/v1/D19-6016",
    pages = "130--135",
    abstract = "Understanding common sense is important for effective natural language reasoning. One type of common sense is how two objects compare on physical properties such as size and weight: e.g., `is a house bigger than a person?'. We probe whether pre-trained representations capture comparisons and find they, in fact, have higher accuracy than previous approaches. They also generalize to comparisons involving objects not seen during training. We investigate \textit{how} such comparisons are made: models learn a consistent ordering over all the objects in the comparisons. Probing models have significantly higher accuracy than those baseline models which use dataset artifacts: e.g., memorizing some words are larger than any other word."
}