Abstract
One of the most powerful features of contextualized models is their dynamic embeddings for words in context, leading to state-of-the-art representations for context-aware lexical semantics. In this paper, we present a post-processing technique that enhances these representations by learning a transformation through static anchors. Our method requires only another pre-trained model; no labeled data is needed. We show consistent improvement in a range of benchmark tasks that test contextual variations of meaning both across different usages of a word and across different words as they are used in context. We demonstrate that while the original contextual representations can be improved by another embedding space from both contextualized and static models, the static embeddings, which have lower computational requirements, provide the most gains.

- Anthology ID: 2020.emnlp-main.333
- Volume: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)
- Month: November
- Year: 2020
- Address: Online
- Editors: Bonnie Webber, Trevor Cohn, Yulan He, Yang Liu
- Venue: EMNLP
- Publisher: Association for Computational Linguistics
- Pages: 4066–4075
- URL: https://aclanthology.org/2020.emnlp-main.333
- DOI: 10.18653/v1/2020.emnlp-main.333
- Cite (ACL): Qianchu Liu, Diana McCarthy, and Anna Korhonen. 2020. Towards Better Context-aware Lexical Semantics: Adjusting Contextualized Representations through Static Anchors. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 4066–4075, Online. Association for Computational Linguistics.
- Cite (Informal): Towards Better Context-aware Lexical Semantics: Adjusting Contextualized Representations through Static Anchors (Liu et al., EMNLP 2020)
- PDF: https://preview.aclanthology.org/nschneid-patch-1/2020.emnlp-main.333.pdf
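The abstract describes learning a transformation that adjusts contextualized representations toward a second embedding space via static anchors. As a rough illustration only (not the paper's actual method), one common way to learn such a mapping between two embedding spaces is orthogonal Procrustes alignment; the sketch below assumes paired anchor vectors and uses SVD to find the best rotation, with all data synthetic:

```python
import numpy as np

def learn_procrustes(source, target):
    """Return the orthogonal matrix W minimizing ||source @ W - target||_F.

    Classic orthogonal Procrustes solution: SVD of the cross-covariance
    matrix source.T @ target gives the optimal rotation U @ Vt.
    """
    u, _, vt = np.linalg.svd(source.T @ target)
    return u @ vt

rng = np.random.default_rng(0)

# Hypothetical setup: 100 paired anchor vectors in 50 dimensions.
anchors = rng.normal(size=(100, 50))            # static-embedding anchors
true_rot, _ = np.linalg.qr(rng.normal(size=(50, 50)))
contextual = anchors @ true_rot.T               # contextual vectors, here an
                                                # exact rotation of the anchors

W = learn_procrustes(contextual, anchors)       # learned transformation
adjusted = contextual @ W                       # post-processed representations
```

In this toy case the contextual vectors are an exact rotation of the anchors, so the learned `W` recovers them perfectly; with real embeddings the fit is only approximate, and the same `W` would then be applied to contextual vectors of unseen words in context.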