Do ever larger octopi still amplify reporting biases? Evidence from judgments of typical colour

Fangyu Liu, Julian Eisenschlos, Jeremy Cole, Nigel Collier


Abstract
Language models (LMs) trained on raw texts have no direct access to the physical world. Gordon and Van Durme (2013) point out that LMs can thus suffer from reporting bias: texts rarely report on common facts, instead focusing on the unusual aspects of a situation. If LMs are only trained on text corpora and naively memorise local co-occurrence statistics, they would thus naturally learn a biased view of the physical world. While prior studies have repeatedly verified that LMs of smaller scales (e.g., RoBERTa, GPT-2) amplify reporting bias, it remains unknown whether such trends continue when models are scaled up. We investigate reporting bias from the perspective of colour in larger language models (LLMs) such as PaLM and GPT-3. Specifically, we query LLMs for the typical colour of objects, which is one simple type of perceptually grounded physical common sense. Surprisingly, we find that LLMs significantly outperform smaller LMs in determining an object’s typical colour and more closely track human judgments, instead of overfitting to surface patterns stored in texts. This suggests that very large models of language alone are able to overcome certain types of reporting bias that are characterised by local co-occurrences.
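
As a rough illustration of the kind of cloze-style colour probe the abstract describes, the sketch below queries a smaller masked LM (RoBERTa, via Hugging Face transformers) for an object's typical colour. The prompt template, colour inventory, and model choice are illustrative assumptions, not the paper's exact setup or released code.

```python
# Minimal sketch of a colour probe for a masked LM, assuming a simple
# cloze template and a fixed candidate colour list (both hypothetical).
from transformers import pipeline

COLOURS = ["red", "orange", "yellow", "green", "blue", "purple",
           "pink", "brown", "black", "white", "grey"]

fill = pipeline("fill-mask", model="roberta-base")

def typical_colour(obj: str):
    """Rank candidate colours for `obj` by the LM's mask-fill probability."""
    prompt = f"The usual colour of a {obj} is {fill.tokenizer.mask_token}."
    # Restrict predictions to the candidate colour terms (leading space for
    # RoBERTa's BPE vocabulary) and return them sorted by score.
    preds = fill(prompt, targets=[" " + c for c in COLOURS])
    return [(p["token_str"].strip(), p["score"]) for p in preds]

print(typical_colour("banana")[:3])  # top-3 colours by probability
```

Comparing such distributions against human colour judgments, for both smaller LMs and LLMs queried through generation, is the kind of evaluation the paper reports.
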
Anthology ID:
2022.aacl-short.27
Volume:
Proceedings of the 2nd Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 12th International Joint Conference on Natural Language Processing (Volume 2: Short Papers)
Month:
November
Year:
2022
Address:
Online only
Venues:
AACL | IJCNLP
Publisher:
Association for Computational Linguistics
Pages:
210–220
URL:
https://aclanthology.org/2022.aacl-short.27
Cite (ACL):
Fangyu Liu, Julian Eisenschlos, Jeremy Cole, and Nigel Collier. 2022. Do ever larger octopi still amplify reporting biases? Evidence from judgments of typical colour. In Proceedings of the 2nd Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics and the 12th International Joint Conference on Natural Language Processing (Volume 2: Short Papers), pages 210–220, Online only. Association for Computational Linguistics.
Cite (Informal):
Do ever larger octopi still amplify reporting biases? Evidence from judgments of typical colour (Liu et al., AACL-IJCNLP 2022)
PDF:
https://preview.aclanthology.org/auto-file-uploads/2022.aacl-short.27.pdf