Abstract
Cross-lingual embeddings aim to represent words in multiple languages in a shared vector space by capturing semantic similarities across languages. They are a crucial component for scaling tasks to multiple languages by transferring knowledge from languages with rich resources to low-resource languages. A common approach to learning cross-lingual embeddings is to train monolingual embeddings separately for each language and learn a linear projection from the monolingual spaces into a shared space, where the mapping relies on a small seed dictionary. While there are high-quality generic seed dictionaries and pre-trained cross-lingual embeddings available for many language pairs, there is little research on how they perform on specialised tasks. In this paper, we investigate the best practices for constructing the seed dictionary for a specific domain. We evaluate the embeddings on the sequence labelling task of Curriculum Vitae parsing and show that the size of a bilingual dictionary, the frequency of the dictionary words in the domain corpora and the source of data (task-specific vs generic) influence performance. We also show that the less training data is available in the low-resource language, the more the construction of the bilingual dictionary matters, and demonstrate that some of the choices are crucial in the zero-shot transfer learning case.
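As background for the mapping approach the abstract describes, here is a minimal sketch (not the authors' released code) of learning such a linear projection from a seed dictionary, using the standard orthogonal Procrustes solution. The file names, helper functions (`load_vectors`, `learn_mapping`) and the toy seed pairs are illustrative assumptions, not artifacts of the paper.

```python
# Sketch: map source-language embeddings into the target-language space
# with a linear projection W learned from a small seed dictionary.
import numpy as np

def load_vectors(path):
    """Load word vectors from a word2vec-style text file: 'word v1 v2 ...'."""
    vectors = {}
    with open(path, encoding="utf-8") as f:
        for line in f:
            parts = line.rstrip().split(" ")
            if len(parts) < 3:  # skip the optional "count dim" header line
                continue
            vectors[parts[0]] = np.asarray(parts[1:], dtype=np.float32)
    return vectors

def learn_mapping(src_vecs, tgt_vecs, seed_dict):
    """Solve min_W ||XW - Z||_F subject to W orthogonal (Procrustes)."""
    pairs = [(s, t) for s, t in seed_dict if s in src_vecs and t in tgt_vecs]
    X = np.stack([src_vecs[s] for s, _ in pairs])  # source seed matrix
    Z = np.stack([tgt_vecs[t] for _, t in pairs])  # target seed matrix
    # Closed-form solution: W = U V^T, where U S V^T = SVD(X^T Z)
    U, _, Vt = np.linalg.svd(X.T @ Z)
    return U @ Vt

# Hypothetical usage with domain-specific (CV-corpus) embeddings:
src = load_vectors("cv_corpus.en.vec")  # assumed file name
tgt = load_vectors("cv_corpus.nl.vec")  # assumed file name
seed = [("experience", "ervaring"), ("education", "opleiding")]  # toy seed
W = learn_mapping(src, tgt, seed)
projected = src["experience"] @ W  # now comparable to target-space vectors
```

In line with the paper's findings, a practical variant of this sketch would draw `seed` from word pairs that are frequent in the domain corpora rather than from a generic bilingual dictionary.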
- Anthology ID: W19-4327
- Volume: Proceedings of the 4th Workshop on Representation Learning for NLP (RepL4NLP-2019)
- Month: August
- Year: 2019
- Address: Florence, Italy
- Editors: Isabelle Augenstein, Spandana Gella, Sebastian Ruder, Katharina Kann, Burcu Can, Johannes Welbl, Alexis Conneau, Xiang Ren, Marek Rei
- Venue: RepL4NLP
- SIG: SIGREP
- Publisher: Association for Computational Linguistics
- Pages: 230–234
- URL: https://aclanthology.org/W19-4327
- DOI: 10.18653/v1/W19-4327
- Cite (ACL): Lena Shakurova, Beata Nyari, Chao Li, and Mihai Rotaru. 2019. Best Practices for Learning Domain-Specific Cross-Lingual Embeddings. In Proceedings of the 4th Workshop on Representation Learning for NLP (RepL4NLP-2019), pages 230–234, Florence, Italy. Association for Computational Linguistics.
- Cite (Informal): Best Practices for Learning Domain-Specific Cross-Lingual Embeddings (Shakurova et al., RepL4NLP 2019)
- PDF: https://aclanthology.org/W19-4327.pdf