Abstract
We show how to adapt bilingual word embeddings (BWEs) to bootstrap a cross-lingual named entity recognition (NER) system in a language with no labeled data. We assume a setting where we are given a comparable corpus with NER labels for the source language only; our goal is to build a NER model for the target language. The proposed multi-task model jointly trains bilingual word embeddings while optimizing a NER objective. This creates word embeddings that are both shared between languages and fine-tuned for the NER task.
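As a rough illustration of the joint objective described in the abstract, the minimal PyTorch sketch below combines a supervised NER tagging loss on the labeled source language with a simple embedding-alignment loss over word pairs assumed to be translations. This is not the authors' implementation: the BiLSTM tagger, the MSE alignment term, the source of the translation pairs, and the weighting `lam` are all illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class MultiTaskNER(nn.Module):
    """Toy multi-task model: shared-space bilingual embeddings + NER tagger (illustrative only)."""

    def __init__(self, src_vocab, tgt_vocab, emb_dim=100, hidden=128, n_tags=9):
        super().__init__()
        # Separate lookup tables per language; the alignment loss ties them together.
        self.src_emb = nn.Embedding(src_vocab, emb_dim)
        self.tgt_emb = nn.Embedding(tgt_vocab, emb_dim)
        self.encoder = nn.LSTM(emb_dim, hidden, batch_first=True, bidirectional=True)
        self.tagger = nn.Linear(2 * hidden, n_tags)

    def ner_loss(self, src_ids, tags):
        # Supervised NER objective, available only for the source language.
        h, _ = self.encoder(self.src_emb(src_ids))
        logits = self.tagger(h)
        return F.cross_entropy(logits.reshape(-1, logits.size(-1)), tags.reshape(-1))

    def align_loss(self, src_pair_ids, tgt_pair_ids):
        # Bilingual embedding objective: pull embeddings of assumed translation
        # pairs (hypothetical supervision source) toward each other.
        return F.mse_loss(self.src_emb(src_pair_ids), self.tgt_emb(tgt_pair_ids))

    def joint_loss(self, src_ids, tags, src_pair_ids, tgt_pair_ids, lam=1.0):
        # Multi-task objective: NER loss plus a weighted alignment loss.
        return self.ner_loss(src_ids, tags) + lam * self.align_loss(src_pair_ids, tgt_pair_ids)


# Usage sketch with random toy data.
model = MultiTaskNER(src_vocab=1000, tgt_vocab=1200)
src_ids = torch.randint(0, 1000, (4, 12))   # batch of source-language sentences
tags = torch.randint(0, 9, (4, 12))         # NER tags for those sentences
src_pairs = torch.randint(0, 1000, (32,))   # source side of assumed translation pairs
tgt_pairs = torch.randint(0, 1200, (32,))   # target side of assumed translation pairs
loss = model.joint_loss(src_ids, tags, src_pairs, tgt_pairs, lam=0.5)
loss.backward()
```

Because both losses update the same embedding tables, the target-language embeddings end up in a space that is both aligned with the source language and shaped by the NER signal, which is the intuition behind the abstract's joint training.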
- Anthology ID: I17-2065
- Volume: Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 2: Short Papers)
- Month: November
- Year: 2017
- Address: Taipei, Taiwan
- Venue: IJCNLP
- Publisher: Asian Federation of Natural Language Processing
- Pages: 383–388
- URL: https://aclanthology.org/I17-2065
- Cite (ACL): Dingquan Wang, Nanyun Peng, and Kevin Duh. 2017. A Multi-task Learning Approach to Adapting Bilingual Word Embeddings for Cross-lingual Named Entity Recognition. In Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 2: Short Papers), pages 383–388, Taipei, Taiwan. Asian Federation of Natural Language Processing.
- Cite (Informal): A Multi-task Learning Approach to Adapting Bilingual Word Embeddings for Cross-lingual Named Entity Recognition (Wang et al., IJCNLP 2017)
- PDF: https://preview.aclanthology.org/ingestion-script-update/I17-2065.pdf