Xiaoyu Lv
2022
Unsupervised Preference-Aware Language Identification
Xingzhang Ren | Baosong Yang | Dayiheng Liu | Haibo Zhang | Xiaoyu Lv | Liang Yao | Jun Xie
Findings of the Association for Computational Linguistics: ACL 2022
Recognizing the language of ambiguous texts has become a main challenge in language identification (LID). When using multilingual applications, users have their own language preferences, which can be regarded as external knowledge for LID. Nevertheless, current studies do not consider these inter-personal variations, owing to the lack of user-annotated training data. To fill this gap, we introduce preference-aware LID and propose a novel unsupervised learning strategy. Concretely, we construct a pseudo training set for each user by extracting training samples from a standard LID corpus according to his/her historical language distribution. In addition, we contribute the first user-labeled LID test set, called “U-LID”. Experimental results reveal that our model captures user traits and significantly outperforms existing LID systems on handling ambiguous texts. Our code and benchmark have been released.
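A minimal sketch of the pseudo-training-set construction the abstract describes, assuming a standard LID corpus of (text, language) pairs and a per-user language distribution estimated from the user's history; all names and thresholds here are illustrative, not the authors' released code.

```python
import random
from collections import defaultdict

def build_pseudo_training_set(lid_corpus, user_lang_dist, n_samples=10_000, seed=0):
    """Sample a user-specific training set from a standard LID corpus.

    lid_corpus:     list of (text, language) pairs.
    user_lang_dist: dict mapping language -> probability, estimated from
                    the user's historical inputs (the "preference").
    """
    rng = random.Random(seed)

    # Bucket the corpus by language so samples can be drawn per language.
    by_lang = defaultdict(list)
    for text, lang in lid_corpus:
        by_lang[lang].append(text)

    pseudo_set = []
    langs = list(user_lang_dist)
    weights = [user_lang_dist[l] for l in langs]
    for _ in range(n_samples):
        # Draw a language according to the user's historical distribution,
        # then draw a training sentence of that language.
        lang = rng.choices(langs, weights=weights, k=1)[0]
        if by_lang[lang]:
            pseudo_set.append((rng.choice(by_lang[lang]), lang))
    return pseudo_set

# Toy usage: a user who mostly writes English, sometimes German.
corpus = [("hello world", "en"), ("hallo welt", "de"), ("bonjour", "fr")]
user_dist = {"en": 0.7, "de": 0.25, "fr": 0.05}
print(build_pseudo_training_set(corpus, user_dist, n_samples=5))
```

A model fine-tuned on such a set is biased toward the languages the user actually uses, which is what resolves otherwise ambiguous short texts.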
Effective Approaches to Neural Query Language Identification
Xingzhang Ren | Baosong Yang | Dayiheng Liu | Haibo Zhang | Xiaoyu Lv | Liang Yao | Jun Xie
Computational Linguistics, Volume 48, Issue 4 - December 2022
Query language identification (Q-LID) plays a crucial role in a cross-lingual search engine. There exist two main challenges in Q-LID: (1) insufficient contextual information in queries for disambiguation; and (2) the lack of query-style training examples for low-resource languages. In this article, we propose a neural Q-LID model by alleviating the above problems from both model architecture and data augmentation perspectives. Concretely, we build our model upon the advanced Transformer model. In order to enhance the discrimination of queries, a variety of external features (e.g., character, word, as well as script) are fed into the model and fused by a multi-scale attention mechanism. Moreover, to remedy the low-resource challenge in this task, a novel machine translation–based strategy is proposed to automatically generate synthetic query-style data for low-resource languages. We contribute the first Q-LID test set, called QID-21, which consists of search queries in 21 languages. Experimental results reveal that our model yields better classification accuracy than strong baselines and existing LID systems on both query and traditional LID tasks.
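A minimal sketch of fusing character-, word-, and script-level features with attention over the three scales, as the abstract outlines. This is an illustrative stand-in for the paper's multi-scale attention, not its actual architecture; every dimension, vocabulary size, and module name below is assumed.

```python
import torch
import torch.nn as nn

class MultiScaleFusion(nn.Module):
    """Fuse character, word, and script representations of a query
    via an attention distribution over the three scales."""

    def __init__(self, char_vocab, word_vocab, script_vocab, d_model=256, n_langs=21):
        super().__init__()
        self.char_emb = nn.Embedding(char_vocab, d_model)
        self.word_emb = nn.Embedding(word_vocab, d_model)
        self.script_emb = nn.Embedding(script_vocab, d_model)
        # A learned query vector that scores how much each scale matters.
        self.scale_query = nn.Parameter(torch.randn(d_model))
        self.classifier = nn.Linear(d_model, n_langs)

    def forward(self, chars, words, scripts):
        # Mean-pool each stream to one vector per query, then stack: (B, 3, d).
        feats = torch.stack([
            self.char_emb(chars).mean(dim=1),
            self.word_emb(words).mean(dim=1),
            self.script_emb(scripts).mean(dim=1),
        ], dim=1)
        scores = feats @ self.scale_query                    # (B, 3)
        weights = torch.softmax(scores, dim=-1)              # attention over scales
        fused = (weights.unsqueeze(-1) * feats).sum(dim=1)   # (B, d)
        return self.classifier(fused)                        # language logits

# Toy usage with random feature indices for a batch of 2 queries.
model = MultiScaleFusion(char_vocab=500, word_vocab=10_000, script_vocab=30)
logits = model(torch.randint(0, 500, (2, 16)),
               torch.randint(0, 10_000, (2, 8)),
               torch.randint(0, 30, (2, 16)))
print(logits.shape)  # torch.Size([2, 21])
```

The point of attending over scales is that short queries often carry more signal at the character or script level than at the word level, and the weighting lets the model decide per query.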
2018
Alibaba Submission to the WMT18 Parallel Corpus Filtering Task
Jun Lu | Xiaoyu Lv | Yangbin Shi | Boxing Chen
Proceedings of the Third Conference on Machine Translation: Shared Task Papers
This paper describes the Alibaba Machine Translation Group submissions to the WMT 2018 Shared Task on Parallel Corpus Filtering. To evaluate the quality of the parallel corpus, three characteristics of the corpus are investigated: 1) the bilingual/translation quality, 2) the monolingual quality, and 3) the corpus diversity. Both rule-based and model-based methods are used to score the parallel sentence pairs. The final parallel corpus filtering system is reliable, easy to build, and easy to adapt to other language pairs.
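A minimal sketch of combining rule-based and model-based scores to filter sentence pairs, in the spirit of the abstract; the thresholds, the length-ratio rule, and the `model_score` placeholder are all assumptions, not the submission's actual scoring pipeline.

```python
def rule_score(src, tgt):
    """Cheap rule-based checks on a sentence pair: non-emptiness and a
    plausible source/target length ratio (thresholds are illustrative)."""
    if not src.strip() or not tgt.strip():
        return 0.0
    ratio = len(src.split()) / max(len(tgt.split()), 1)
    if not 0.5 <= ratio <= 2.0:   # implausible length ratio -> reject
        return 0.0
    return 1.0

def filter_corpus(pairs, model_score, threshold=0.5):
    """Keep pairs whose combined rule and model score clears a threshold.

    model_score: any callable (src, tgt) -> [0, 1], e.g. a translation or
                 language-model scorer (a hypothetical placeholder here).
    """
    kept = []
    for src, tgt in pairs:
        score = rule_score(src, tgt) * model_score(src, tgt)
        if score >= threshold:
            kept.append((src, tgt, score))
    return kept

# Toy usage with a dummy model scorer that trusts every pair equally.
pairs = [("hello world", "hallo welt"), ("a", "ein sehr sehr langer satz hier")]
print(filter_corpus(pairs, model_score=lambda s, t: 0.9))
```

Multiplying the two scores means a pair must pass both the cheap rules and the model check, which is a common way to keep the rule stage as a fast pre-filter.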
Co-authors
- Xingzhang Ren 2
- Baosong Yang 2
- Dayiheng Liu 2
- Haibo Zhang 2
- Liang Yao 2
- Jun Xie 2
- Jun Lu 1
- Yangbin Shi 1
- Boxing Chen 1