SLICE: Supersense-based Lightweight Interpretable Contextual Embeddings

Cindy Aloui, Carlos Ramisch, Alexis Nasr, Lucie Barque


Abstract
Contextualised embeddings such as BERT have become de facto state-of-the-art references in many NLP applications, thanks to their impressive performance. However, their opaqueness makes it hard to interpret their behaviour. We introduce SLICE, a hybrid model that combines supersense labels with contextual embeddings, along with a weakly supervised method to learn interpretable embeddings from raw corpora and small lists of seed words. Our model represents both a word and its context as embeddings in the same compact space, whose dimensions correspond to interpretable supersenses. We assess the model on a supersense tagging task for French nouns. The small amount of supervision required makes it particularly well suited to low-resource scenarios. Thanks to its interpretability, we perform linguistic analyses of the predicted supersenses in terms of input word and context representations.
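
To make the shared word/context space concrete, here is a minimal illustrative sketch (ours, not the authors' implementation): words and contexts live in the same low-dimensional space whose axes are supersenses, and tagging picks the best-scoring dimension. The supersense inventory, the toy vectors, and the elementwise combination below are all illustrative assumptions, not the paper's learned parameters or scoring function.

import numpy as np

# Hypothetical supersense inventory (the paper targets French nouns).
SUPERSENSES = ["person", "animal", "food", "artifact", "act", "feeling"]

# Toy lexicon: each word is a vector over supersense dimensions. In SLICE
# such vectors are learned from raw corpora plus small seed-word lists;
# here they are hand-set purely for illustration.
lexicon = {
    "avocat": np.array([0.6, 0.0, 0.4, 0.0, 0.0, 0.0]),   # "lawyer" vs. "avocado"
    "mange":  np.array([0.0, 0.1, 0.8, 0.0, 0.1, 0.0]),   # "eats": food-oriented
    "plaide": np.array([0.7, 0.0, 0.0, 0.0, 0.3, 0.0]),   # "pleads": person-oriented
}

def supersense_tag(word, context_words):
    # Average the vectors of known context words to obtain a context
    # embedding living in the same supersense space as the word embedding.
    known = [lexicon[w] for w in context_words if w in lexicon]
    context = np.mean(known, axis=0)
    # Combine word and context elementwise (one simple choice among many)
    # and return the best-scoring supersense dimension.
    scores = lexicon[word] * context
    return SUPERSENSES[int(np.argmax(scores))]

print(supersense_tag("avocat", ["mange"]))   # food   (eating context)
print(supersense_tag("avocat", ["plaide"]))  # person (legal context)

Because every dimension is a supersense, the intermediate vectors themselves can be inspected directly, which is what enables the linguistic analyses mentioned in the abstract.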
Anthology ID:
2020.coling-main.298
Volume:
Proceedings of the 28th International Conference on Computational Linguistics
Month:
December
Year:
2020
Address:
Barcelona, Spain (Online)
Editors:
Donia Scott, Nuria Bel, Chengqing Zong
Venue:
COLING
Publisher:
International Committee on Computational Linguistics
Pages:
3357–3370
URL:
https://aclanthology.org/2020.coling-main.298
DOI:
10.18653/v1/2020.coling-main.298
Cite (ACL):
Cindy Aloui, Carlos Ramisch, Alexis Nasr, and Lucie Barque. 2020. SLICE: Supersense-based Lightweight Interpretable Contextual Embeddings. In Proceedings of the 28th International Conference on Computational Linguistics, pages 3357–3370, Barcelona, Spain (Online). International Committee on Computational Linguistics.
Cite (Informal):
SLICE: Supersense-based Lightweight Interpretable Contextual Embeddings (Aloui et al., COLING 2020)
PDF:
https://preview.aclanthology.org/emnlp-22-attachments/2020.coling-main.298.pdf