Decoding Symbolism in Language Models

Meiqi Guo, Rebecca Hwa, Adriana Kovashka


Abstract
This work explores the feasibility of eliciting knowledge from language models (LMs) to decode symbolism, recognizing something (e.g., roses) as a stand-in for another (e.g., love). We present our evaluative framework, Symbolism Analysis (SymbA), which compares LMs (e.g., RoBERTa, GPT-J) on different types of symbolism and analyzes the outcomes along multiple metrics. Our findings suggest that conventional symbols are more reliably elicited from LMs, while situated symbols are more challenging. Results also reveal the negative impact of bias in the pre-training corpora. We further demonstrate that a simple re-ranking strategy can mitigate the bias and significantly improve model performance, bringing it on par with human performance in some cases.
Anthology ID:
2023.acl-long.186
Volume:
Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
Month:
July
Year:
2023
Address:
Toronto, Canada
Editors:
Anna Rogers, Jordan Boyd-Graber, Naoaki Okazaki
Venue:
ACL
Publisher:
Association for Computational Linguistics
Pages:
3311–3324
URL:
https://aclanthology.org/2023.acl-long.186
DOI:
10.18653/v1/2023.acl-long.186
Cite (ACL):
Meiqi Guo, Rebecca Hwa, and Adriana Kovashka. 2023. Decoding Symbolism in Language Models. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 3311–3324, Toronto, Canada. Association for Computational Linguistics.
Cite (Informal):
Decoding Symbolism in Language Models (Guo et al., ACL 2023)
PDF:
https://preview.aclanthology.org/dois-2013-emnlp/2023.acl-long.186.pdf
Video:
https://preview.aclanthology.org/dois-2013-emnlp/2023.acl-long.186.mp4