Using Paraphrases to Study Properties of Contextual Embeddings

Laura Burdick, Jonathan K. Kummerfeld, Rada Mihalcea


Abstract
We use paraphrases as a unique source of data to analyze contextualized embeddings, with a particular focus on BERT. Because paraphrases naturally encode consistent word and phrase semantics, they provide a distinctive lens for investigating properties of embeddings. Using the Paraphrase Database’s alignments, we study words within paraphrases as well as phrase representations. We find that contextual embeddings effectively handle polysemous words, but in many cases give synonyms surprisingly different representations. We confirm previous findings that BERT is sensitive to word order, but observe patterns of contextualization across BERT’s layers that differ slightly from those reported in prior work.
Anthology ID: 2022.naacl-main.338
Volume: Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
Month: July
Year: 2022
Address: Seattle, United States
Venue: NAACL
Publisher: Association for Computational Linguistics
Pages: 4558–4568
URL: https://aclanthology.org/2022.naacl-main.338
DOI: 10.18653/v1/2022.naacl-main.338
Cite (ACL): Laura Burdick, Jonathan K. Kummerfeld, and Rada Mihalcea. 2022. Using Paraphrases to Study Properties of Contextual Embeddings. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 4558–4568, Seattle, United States. Association for Computational Linguistics.
Cite (Informal): Using Paraphrases to Study Properties of Contextual Embeddings (Burdick et al., NAACL 2022)
PDF: https://preview.aclanthology.org/starsem-semeval-split/2022.naacl-main.338.pdf
Video: https://preview.aclanthology.org/starsem-semeval-split/2022.naacl-main.338.mp4