Hyunchang Cho
2022
Specializing Multi-domain NMT via Penalizing Low Mutual Information
Jiyoung Lee | Hantae Kim | Hyunchang Cho | Edward Choi | Cheonbok Park
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
Multi-domain Neural Machine Translation (NMT) trains a single model on multiple domains. It is appealing because of its efficacy in handling multiple domains within one model. An ideal multi-domain NMT model learns distinctive domain characteristics simultaneously; however, grasping domain peculiarity is a non-trivial task. In this paper, we investigate domain-specific information through the lens of mutual information (MI) and propose a new objective that penalizes low MI, pushing it higher. Our method achieves state-of-the-art performance among current competitive multi-domain NMT models. We also show that our objective raises low MI, resulting in domain-specialized multi-domain NMT.
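As a rough illustration of how such a penalty could be implemented, the sketch below estimates per-token MI as the gap between a domain-conditioned and a domain-agnostic log-likelihood and adds a hinge penalty for tokens whose MI falls below a threshold. The function name, the two sets of logits, and the hinge form are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def low_mi_penalty(logits_domain, logits_generic, targets, threshold=0.0):
    """Hypothetical sketch of an MI-based penalty for multi-domain NMT.

    Per-token pointwise MI is estimated as
        log p(y_t | x, d) - log p(y_t | x),
    i.e. the gap between the domain-conditioned and domain-agnostic
    token log-likelihoods; tokens below `threshold` are penalized.
    """
    logp_d = F.log_softmax(logits_domain, dim=-1)   # (batch, time, vocab)
    logp = F.log_softmax(logits_generic, dim=-1)
    tok_logp_d = logp_d.gather(-1, targets.unsqueeze(-1)).squeeze(-1)
    tok_logp = logp.gather(-1, targets.unsqueeze(-1)).squeeze(-1)
    mi = tok_logp_d - tok_logp                      # pointwise MI estimate
    return torch.clamp(threshold - mi, min=0.0).mean()  # penalize low MI only
```

In training, a term like this would be added to the usual cross-entropy loss with a weighting coefficient, so that tokens whose prediction barely benefits from the domain tag are pushed toward higher MI.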
2021
Papago’s Submissions to the WMT21 Triangular Translation Task
Jeonghyeok Park | Hyunjoong Kim | Hyunchang Cho
Proceedings of the Sixth Conference on Machine Translation
This paper describes Naver Papago’s submission to the WMT21 shared triangular MT task, which aims to enhance a non-English MT system with tri-language parallel data. The provided parallel data are Russian-Chinese (direct), Russian-English (indirect), and English-Chinese (indirect). The task’s goal is to improve the quality of a Russian-to-Chinese MT system by exploiting these direct and indirect parallel resources. The direct parallel data are noisy, having been crawled from the web; to alleviate this issue, we conduct extensive experiments to find effective data filtering methods. Guided by the empirical finding that bilingual MT outperforms multilingual MT, along with related experimental results, we approach the task as bilingual MT, transforming the two indirect datasets into direct data. In addition, we use the Transformer, a robust translation model, as our baseline and integrate several techniques: checkpoint averaging, model ensembling, and re-ranking. Our final system improves over the baseline by 12.7 BLEU points on the WMT21 triangular MT development set. In the official evaluation on the test set, our system ranks 2nd in terms of BLEU score.
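Of the techniques listed above, checkpoint averaging is the easiest to sketch. The snippet below is a minimal, hypothetical implementation that averages parameter tensors across several saved checkpoints; the file paths and the assumption that each file is a plain PyTorch state dict are illustrative, not details of the actual submission.

```python
import torch

def average_checkpoints(paths):
    """Average the parameter tensors of several saved checkpoints.

    Assumes each path holds a plain state dict of tensors (hypothetical;
    real training frameworks often wrap the state dict in extra metadata).
    """
    avg = None
    for path in paths:
        state = torch.load(path, map_location="cpu")
        if avg is None:
            avg = {k: v.float().clone() for k, v in state.items()}
        else:
            for k, v in state.items():
                avg[k] += v.float()
    return {k: v / len(paths) for k, v in avg.items()}

# Usage (illustrative paths):
# model.load_state_dict(average_checkpoints(["ckpt_48.pt", "ckpt_49.pt", "ckpt_50.pt"]))
```

Averaging the last few checkpoints is a common way to smooth out optimization noise in Transformer training before ensembling or re-ranking.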
2020
Revisiting Round-trip Translation for Quality Estimation
Jihyung Moon | Hyunchang Cho | Eunjeong L. Park
Proceedings of the 22nd Annual Conference of the European Association for Machine Translation
Quality estimation (QE), the task of automatically evaluating translation quality without human-translated references, is one of the important challenges for machine translation (MT). The BLEU score of a round-trip translation (RTT) was once considered for QE, but it proved a poor predictor of translation quality because BLEU is not an adequate metric for detecting semantic similarity between the input and its RTT. Recently, pre-trained language models have made breakthroughs in many NLP tasks by providing semantically meaningful word and sentence embeddings. In this paper, we apply such semantic embeddings to an RTT-based QE metric. Our method achieves the highest correlations with human judgments compared to the WMT 2019 quality estimation metric task submissions. Additionally, we observe that with semantic-level metrics, RTT-based QE is robust to the choice of backward translation system and shows consistent performance on both SMT and NMT forward translation systems.
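A minimal sketch of the core idea, scoring a translation by the cosine similarity between sentence embeddings of the source and its round-trip translation, might look like the following. The embedding model name is an illustrative choice, not necessarily the one used in the paper.

```python
from sentence_transformers import SentenceTransformer
import numpy as np

def rtt_qe_score(source: str, round_trip: str,
                 model_name: str = "paraphrase-multilingual-MiniLM-L12-v2") -> float:
    """Semantic RTT-based QE: embed the source and its round-trip
    translation, then return their cosine similarity as a quality score.
    The model name is a hypothetical example of a multilingual encoder."""
    model = SentenceTransformer(model_name)
    # normalize_embeddings=True makes the dot product equal cosine similarity
    emb = model.encode([source, round_trip], normalize_embeddings=True)
    return float(np.dot(emb[0], emb[1]))

# Usage: translate source -> target -> back to source with any MT system,
# then call rtt_qe_score(source, back_translation).
```

Because the comparison happens in embedding space rather than via n-gram overlap, the score tolerates paraphrases introduced by the backward translation system, which helps explain the metric's robustness to the choice of that system.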