Multilingual Model Using Cross-Task Embedding Projection

Jin Sakuma, Naoki Yoshinaga

Abstract
We present a method for applying a neural network trained for a given task in one (resource-rich) language to other (resource-poor) languages. We accomplish this by inducing a mapping from pre-trained cross-lingual word embeddings to the embedding layer of the neural network trained on the resource-rich language. To perform element-wise cross-task embedding projection, we propose a locally linear mapping, which assumes and preserves the local topology across the semantic spaces before and after the projection. Experimental results on topic classification and sentiment analysis tasks showed that the fully task-specific multilingual model obtained with our method outperformed existing multilingual models whose embedding layers are fixed to pre-trained cross-lingual word embeddings.
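
The abstract describes the projection only at a high level. Below is a minimal sketch of how an LLE-style locally linear mapping could project a single word's cross-lingual embedding into a task-specific embedding space, assuming each vector is reconstructed from its k nearest pivot words and the same reconstruction weights are reused in the target space. The function name, the choice of k, and the regularization constant are illustrative assumptions, not taken from the paper.

    import numpy as np

    def locally_linear_projection(x, src_pivot_emb, tgt_pivot_emb, k=10):
        """Project cross-lingual embedding x (d,) into the task-specific space.

        src_pivot_emb: (V, d)  cross-lingual embeddings of pivot words that
                               also have task-specific embeddings.
        tgt_pivot_emb: (V, d') task-specific embeddings of the same pivots.
        """
        # 1. Find the k nearest pivot words to x in the cross-lingual space.
        dists = np.linalg.norm(src_pivot_emb - x, axis=1)
        nn = np.argsort(dists)[:k]
        N = src_pivot_emb[nn]                       # (k, d)

        # 2. Solve for reconstruction weights w (summing to 1) that best
        #    reconstruct x from its neighbours (standard LLE least squares).
        G = (N - x) @ (N - x).T                     # local Gram matrix (k, k)
        G += 1e-3 * np.trace(G) * np.eye(k)         # regularize for stability
        w = np.linalg.solve(G, np.ones(k))
        w /= w.sum()

        # 3. Reuse the same weights on the task-specific embeddings of the
        #    neighbours, preserving the local topology across the two spaces.
        return w @ tgt_pivot_emb[nn]

Applied to every word of a resource-poor language, such a routine would yield a task-specific embedding layer that can be plugged into the network trained on the resource-rich language, which is the setting the paper targets.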
Anthology ID:
K19-1003
Volume:
Proceedings of the 23rd Conference on Computational Natural Language Learning (CoNLL)
Month:
November
Year:
2019
Address:
Hong Kong, China
Venue:
CoNLL
SIG:
SIGNLL
Publisher:
Association for Computational Linguistics
Pages:
22–32
URL:
https://aclanthology.org/K19-1003
DOI:
10.18653/v1/K19-1003
Cite (ACL):
Jin Sakuma and Naoki Yoshinaga. 2019. Multilingual Model Using Cross-Task Embedding Projection. In Proceedings of the 23rd Conference on Computational Natural Language Learning (CoNLL), pages 22–32, Hong Kong, China. Association for Computational Linguistics.
Cite (Informal):
Multilingual Model Using Cross-Task Embedding Projection (Sakuma & Yoshinaga, CoNLL 2019)
PDF:
https://preview.aclanthology.org/ingestion-script-update/K19-1003.pdf