SensePOLAR: Word sense aware interpretability for pre-trained contextual word embeddings

Jan Engler, Sandipan Sikdar, Marlene Lutz, Markus Strohmaier


Abstract
Adding interpretability to word embeddings represents an area of active research in text representation. Recent work has explored the potential of embedding words via so-called polar dimensions (e.g. good vs. bad, correct vs. wrong). Examples of such recent approaches include SemAxis, POLAR, FrameAxis, and BiImp. Although these approaches provide interpretable dimensions for words, they have not been designed to deal with polysemy, i.e. they cannot easily distinguish between different senses of words. To address this limitation, we present SensePOLAR, an extension of the original POLAR framework that enables word sense aware interpretability for pre-trained contextual word embeddings. The resulting interpretable word embeddings achieve a level of performance that is comparable to original contextual word embeddings across a variety of natural language processing tasks including the GLUE and SQuAD benchmarks. Our work removes a fundamental limitation of existing approaches by offering users sense aware interpretations for contextual word embeddings.
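To make the polar-dimension idea concrete, the following is a minimal Python sketch of scoring a word's contextual embedding along an interpretable antonym axis. It illustrates the general idea only and is not the authors' SensePOLAR implementation; the model choice (bert-base-uncased), the example sentences, and the word_embedding helper are illustrative assumptions. Grounding each pole in a context sentence loosely mirrors the sense-aware aspect described in the abstract.

# Minimal sketch of the polar-projection idea: score a word's contextual
# BERT embedding along an axis spanned by an antonym pair (good vs. bad).
# Illustrative assumptions only; not the authors' SensePOLAR implementation.
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")
model.eval()

def word_embedding(sentence: str, word: str) -> torch.Tensor:
    """Average the last-layer hidden states of `word`'s subtokens."""
    enc = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**enc).last_hidden_state[0]  # (seq_len, dim)
    word_ids = tokenizer(word, add_special_tokens=False)["input_ids"]
    tokens = enc["input_ids"][0].tolist()
    # Locate the word's subtoken span within the sentence.
    for i in range(len(tokens) - len(word_ids) + 1):
        if tokens[i : i + len(word_ids)] == word_ids:
            return hidden[i : i + len(word_ids)].mean(dim=0)
    raise ValueError(f"{word!r} not found in {sentence!r}")

# Each pole is embedded in a (hypothetical) context sentence that pins down
# the intended sense; the axis is the difference of the pole embeddings.
good = word_embedding("The movie was good.", "good")
bad = word_embedding("The movie was bad.", "bad")
axis = good - bad

# Project a new word onto the axis; the sign indicates the closer pole.
w = word_embedding("The film was a delightful surprise.", "delightful")
score = (torch.dot(w, axis) / axis.norm()).item()
print(f"good-vs-bad score: {score:+.3f}")  # positive => leans toward 'good'

Each such projection yields one interpretable coordinate per antonym pair; stacking many axes produces the kind of interpretable embedding the paper evaluates against the original contextual embeddings on GLUE and SQuAD.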
Anthology ID:
2022.findings-emnlp.338
Volume:
Findings of the Association for Computational Linguistics: EMNLP 2022
Month:
December
Year:
2022
Address:
Abu Dhabi, United Arab Emirates
Editors:
Yoav Goldberg, Zornitsa Kozareva, Yue Zhang
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
4607–4619
URL:
https://aclanthology.org/2022.findings-emnlp.338
DOI:
10.18653/v1/2022.findings-emnlp.338
Cite (ACL):
Jan Engler, Sandipan Sikdar, Marlene Lutz, and Markus Strohmaier. 2022. SensePOLAR: Word sense aware interpretability for pre-trained contextual word embeddings. In Findings of the Association for Computational Linguistics: EMNLP 2022, pages 4607–4619, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics.
Cite (Informal):
SensePOLAR: Word sense aware interpretability for pre-trained contextual word embeddings (Engler et al., Findings 2022)
PDF:
https://preview.aclanthology.org/nschneid-patch-4/2022.findings-emnlp.338.pdf
Video:
https://preview.aclanthology.org/nschneid-patch-4/2022.findings-emnlp.338.mp4