Yuexin Wu


2018

Unsupervised Cross-lingual Transfer of Word Embedding Spaces
Ruochen Xu | Yiming Yang | Naoki Otani | Yuexin Wu
Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing

Cross-lingual transfer of word embeddings aims to establish the semantic mappings among words in different languages by learning the transformation functions over the corresponding word embedding spaces. Successfully solving this problem would benefit many downstream tasks, such as transferring text classification models from resource-rich languages (e.g., English) to low-resource languages. Supervised methods for this problem rely on the availability of cross-lingual supervision, using either parallel corpora or bilingual lexicons as the labeled data for training, which may not be available for many low-resource languages. This paper proposes an unsupervised learning approach that does not require any cross-lingual labeled data. Given two monolingual word embedding spaces for any language pair, our algorithm optimizes the transformation functions in both directions simultaneously, based on distributional matching as well as minimizing the back-translation losses. We use a neural network implementation to calculate the Sinkhorn distance, a well-defined distributional similarity measure, and optimize our objective through back-propagation. Our evaluation on benchmark datasets for bilingual lexicon induction and cross-lingual word similarity prediction shows stronger or competitive performance of the proposed method compared to other state-of-the-art supervised and unsupervised baselines across many language pairs.
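
As a rough illustration of the distributional-matching objective, the sketch below computes an entropy-regularized Sinkhorn distance between two embedding sets with plain Sinkhorn iterations. This is a minimal NumPy sketch, not the paper's implementation: the function name sinkhorn_distance, the regularizer eps, the iteration count, and the uniform marginals are all illustrative assumptions.

```python
import numpy as np

def sinkhorn_distance(C, eps=0.1, n_iters=200):
    """Entropy-regularized optimal-transport cost between two point sets.

    C : (n, m) pairwise cost matrix, e.g. distances between transformed
        source-language embeddings and target-language embeddings.
    eps, n_iters : illustrative defaults, not values from the paper.
    """
    n, m = C.shape
    a = np.full(n, 1.0 / n)           # uniform marginal on source points
    b = np.full(m, 1.0 / m)           # uniform marginal on target points
    K = np.exp(-C / eps)              # Gibbs kernel
    v = np.ones(m)
    for _ in range(n_iters):          # Sinkhorn fixed-point iterations
        u = a / (K @ v)
        v = b / (K.T @ u)
    P = u[:, None] * K * v[None, :]   # approximate transport plan
    return float((P * C).sum())       # Sinkhorn distance
```

In the paper's setting, the cost matrix would compare transformed source embeddings against target embeddings, and the same quantity would be computed in a differentiable framework so that the transformation functions in both directions, together with the back-translation losses, can be trained by back-propagation.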

Contextual Encoding for Translation Quality Estimation
Junjie Hu | Wei-Cheng Chang | Yuexin Wu | Graham Neubig
Proceedings of the Third Conference on Machine Translation: Shared Task Papers

The task of word-level quality estimation (QE) consists of taking a source sentence and a machine-generated translation, and predicting which words in the output are correct and which are wrong. In this paper, we propose a method to effectively encode the local and global contextual information for each target word using a three-part neural network approach. The first part uses an embedding layer to represent words and their part-of-speech tags in both languages. The second part leverages a one-dimensional convolution layer to integrate local context information for each target word. The third part applies a stack of feed-forward and recurrent neural networks to further encode the global context in the sentence before making the predictions. This model was submitted as the CMU entry to the WMT2018 shared task on QE, and achieved strong results, ranking first in three of the six tracks.
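
To make the three-part architecture concrete, here is a minimal PyTorch sketch of a word-level OK/BAD tagger in that spirit. The class name QEWordTagger, all layer sizes, and the input format (per-target-word word and POS-tag ids) are assumptions for illustration; the submitted system's exact features and hyperparameters are described in the paper.

```python
import torch
import torch.nn as nn

class QEWordTagger(nn.Module):
    """Minimal sketch of a three-part word-level QE tagger (hypothetical sizes)."""

    def __init__(self, vocab=30000, n_pos=50, emb=64, conv=128, hidden=128):
        super().__init__()
        # Part 1: embedding layer for words and their POS tags.
        self.word_emb = nn.Embedding(vocab, emb)
        self.pos_emb = nn.Embedding(n_pos, emb)
        # Part 2: 1-D convolution integrating local context per target word.
        self.conv = nn.Conv1d(2 * emb, conv, kernel_size=3, padding=1)
        # Part 3: feed-forward + bidirectional recurrent layers encoding
        # global sentence context before the per-word predictions.
        self.ff = nn.Linear(conv, hidden)
        self.rnn = nn.LSTM(hidden, hidden, batch_first=True,
                           bidirectional=True)
        self.out = nn.Linear(2 * hidden, 2)

    def forward(self, words, tags):   # both: (batch, seq_len) id tensors
        x = torch.cat([self.word_emb(words), self.pos_emb(tags)], dim=-1)
        x = self.conv(x.transpose(1, 2)).transpose(1, 2).relu()
        x = self.ff(x).relu()
        x, _ = self.rnn(x)
        return self.out(x)            # (batch, seq_len, 2) OK/BAD logits
```

The convolution gives each word a window of local context, while the bidirectional recurrent layer propagates information across the whole sentence before each word is classified.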

2014

Group based Self Training for E-Commerce Product Record Linkage
Xin Zhao | Yuexin Wu | Hongfei Yan | Xiaoming Li
Proceedings of COLING 2014, the 25th International Conference on Computational Linguistics: Technical Papers