POSQA: Probe the World Models of LLMs with Size Comparisons

Chang Shu, Jiuzhou Han, Fangyu Liu, Ehsan Shareghi, Nigel Collier


Abstract
Embodied language comprehension emphasizes that language understanding is not solely a matter of mental processing in the brain but also involves interactions with the physical and social environment. With the explosive growth of Large Language Models (LLMs) and their already ubiquitous presence in our daily lives, it is becoming increasingly necessary to verify their real-world understanding. Inspired by cognitive theories, we propose POSQA: a Physical Object Size Question Answering dataset with simple size comparison questions to examine the extent, and to analyze the potential mechanisms, of the embodied comprehension of the latest LLMs. We show that even the largest LLMs today perform poorly under the zero-shot setting. We then push their limits with advanced prompting techniques and external knowledge augmentation. Furthermore, we investigate whether their real-world comprehension primarily derives from contextual information or internal weights, and analyze the impact of prompt formats and reporting bias for different objects. Our results show that the real-world understanding that LLMs shape from textual data can be vulnerable to deception and confusion induced by the surface form of prompts, making it less aligned with human behaviour.
Anthology ID:
2023.findings-emnlp.504
Volume:
Findings of the Association for Computational Linguistics: EMNLP 2023
Month:
December
Year:
2023
Address:
Singapore
Editors:
Houda Bouamor, Juan Pino, Kalika Bali
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
7518–7531
URL:
https://aclanthology.org/2023.findings-emnlp.504
DOI:
10.18653/v1/2023.findings-emnlp.504
Cite (ACL):
Chang Shu, Jiuzhou Han, Fangyu Liu, Ehsan Shareghi, and Nigel Collier. 2023. POSQA: Probe the World Models of LLMs with Size Comparisons. In Findings of the Association for Computational Linguistics: EMNLP 2023, pages 7518–7531, Singapore. Association for Computational Linguistics.
Cite (Informal):
POSQA: Probe the World Models of LLMs with Size Comparisons (Shu et al., Findings 2023)
PDF:
https://preview.aclanthology.org/naacl24-info/2023.findings-emnlp.504.pdf