Abstract
We study the settings in which deep contextual embeddings (e.g., BERT) give large improvements in performance relative to classic pretrained embeddings (e.g., GloVe) and an even simpler baseline, random word embeddings, focusing on the impact of training set size and the linguistic properties of the task. Surprisingly, we find that both of these simpler baselines can match contextual embeddings on industry-scale data, and often perform within 5 to 10% accuracy (absolute) of them on benchmark tasks. Furthermore, we identify properties of data for which contextual embeddings give particularly large gains: language containing complex structure, ambiguous word usage, and words unseen in training.
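To make the three embedding classes compared in the abstract concrete, here is a minimal PyTorch sketch (not the authors' code; the vocabulary size, embedding dimension, and dummy inputs are illustrative assumptions):

```python
# Minimal sketch of the three embedding baselines, assuming PyTorch.
# VOCAB_SIZE, EMBED_DIM, and the dummy inputs are placeholders, not values
# from the paper.
import torch
import torch.nn as nn

VOCAB_SIZE, EMBED_DIM = 20_000, 300  # illustrative sizes

# 1) Random word embeddings: randomly initialized and frozen, so the
#    downstream model sees one fixed, context-independent vector per word.
random_emb = nn.Embedding(VOCAB_SIZE, EMBED_DIM)
random_emb.weight.requires_grad = False

# 2) Classic pretrained embeddings (e.g., GloVe): fixed vectors loaded from
#    disk. `glove_matrix` stands in for a (VOCAB_SIZE, EMBED_DIM) tensor
#    built elsewhere from a GloVe text file.
glove_matrix = torch.randn(VOCAB_SIZE, EMBED_DIM)  # placeholder for real vectors
glove_emb = nn.Embedding.from_pretrained(glove_matrix, freeze=True)

# 3) Contextual embeddings (e.g., BERT): the vector for a word depends on
#    its sentence context. Sketched here with HuggingFace Transformers,
#    commented out to keep this snippet dependency-free:
# from transformers import AutoModel, AutoTokenizer
# tok = AutoTokenizer.from_pretrained("bert-base-uncased")
# bert = AutoModel.from_pretrained("bert-base-uncased")
# hidden = bert(**tok("a sentence", return_tensors="pt")).last_hidden_state

token_ids = torch.randint(0, VOCAB_SIZE, (1, 12))  # a dummy 12-token sentence
fixed_vectors = random_emb(token_ids)              # (1, 12, 300), same vector per word type
```

In the non-contextual baselines (1 and 2), a word maps to the same vector in every sentence; only the contextual model (3) varies its representation with context, which is why the paper's gains concentrate on complex structure, ambiguity, and unseen words.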
- Anthology ID: 2020.acl-main.236
- Volume: Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics
- Month: July
- Year: 2020
- Address: Online
- Editors: Dan Jurafsky, Joyce Chai, Natalie Schluter, Joel Tetreault
- Venue: ACL
- Publisher: Association for Computational Linguistics
- Pages: 2650–2663
- URL: https://aclanthology.org/2020.acl-main.236
- DOI: 10.18653/v1/2020.acl-main.236
- Cite (ACL): Simran Arora, Avner May, Jian Zhang, and Christopher Ré. 2020. Contextual Embeddings: When Are They Worth It?. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 2650–2663, Online. Association for Computational Linguistics.
- Cite (Informal): Contextual Embeddings: When Are They Worth It? (Arora et al., ACL 2020)
- PDF: https://preview.aclanthology.org/ingest-2024-clasp/2020.acl-main.236.pdf
- Data: CoNLL 2003, GLUE