Qingnan Jiang
2021
Learning Kernel-Smoothed Machine Translation with Retrieved Examples
Qingnan Jiang | Mingxuan Wang | Jun Cao | Shanbo Cheng | Shujian Huang | Lei Li
Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing
How to effectively adapt neural machine translation (NMT) models according to emerging cases without retraining? Despite the great success of neural machine translation, updating the deployed models online remains a challenge. Existing non-parametric approaches that retrieve similar examples from a database to guide the translation process are promising but are prone to overfit the retrieved examples. However, non-parametric methods are prone to overfit the retrieved examples. In this work, we propose to learn Kernel-Smoothed Translation with Example Retrieval (KSTER), an effective approach to adapt neural machine translation models online. Experiments on domain adaptation and multi-domain machine translation datasets show that even without expensive retraining, KSTER is able to achieve improvement of 1.1 to 1.5 BLEU scores over the best existing online adaptation methods. The code and trained models are released at https://github.com/jiangqn/KSTER.
2019
A Challenge Dataset and Effective Models for Aspect-Based Sentiment Analysis
Qingnan Jiang | Lei Chen | Ruifeng Xu | Xiang Ao | Min Yang
Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)
Aspect-based sentiment analysis (ABSA) has attracted increasing attention recently due to its broad applications. In existing ABSA datasets, most sentences contain only one aspect or multiple aspects with the same sentiment polarity, which makes the ABSA task degenerate to sentence-level sentiment analysis. In this paper, we present a new large-scale Multi-Aspect Multi-Sentiment (MAMS) dataset, in which each sentence contains at least two different aspects with different sentiment polarities. The release of this dataset should push forward research in this field. In addition, we propose simple yet effective CapsNet and CapsNet-BERT models, which combine the strengths of recent NLP advances. Experiments on our new dataset show that the proposed models significantly outperform the state-of-the-art baseline methods.