Jeonghyeok Park


2022

Papago’s Submission to the WMT22 Quality Estimation Shared Task
Seunghyun Lim | Jeonghyeok Park
Proceedings of the Seventh Conference on Machine Translation (WMT)

This paper describes Papago’s submission to the WMT 2022 Quality Estimation shared task. We participate in Task 1: Quality Prediction, covering both sentence- and word-level quality prediction. Our system is a multilingual, multi-task model: a single system infers both sentence- and word-level quality on multiple language pairs. Its architecture consists of a Pretrained Language Model (PLM) and task layers, and it is jointly optimized for both sentence- and word-level quality prediction using a multilingual dataset. We propose novel auxiliary tasks for training and explore diverse sources of additional data to demonstrate further performance improvements. Through an ablation study, we examine the effectiveness of the proposed components and find the optimal configuration for training our submission systems under each language-pair and task setting. Finally, the submission systems are trained and used for inference with a K-fold ensemble. Our systems greatly outperform the task organizers’ baseline and achieve performance comparable to other participants’ submissions in both sentence- and word-level quality prediction.
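The joint sentence- and word-level objective described in the abstract can be sketched as follows. This is a minimal illustrative assumption, not the paper's actual implementation: `joint_qe_loss`, the single linear heads, and the weighting `alpha` are hypothetical, standing in for the PLM encoder plus task layers trained with a combined loss.

```python
import numpy as np

def joint_qe_loss(h, sent_w, word_w, sent_target, word_targets, alpha=0.5):
    """Joint multi-task QE loss sketch: a PLM encoding h ([seq_len, d])
    feeds a sentence-level regression head (on the first/[CLS] vector)
    and a word-level OK/BAD tagging head; both are optimized jointly."""
    # Sentence-level head: linear regression on the [CLS] vector, MSE loss.
    sent_pred = float(h[0] @ sent_w)
    sent_loss = (sent_pred - sent_target) ** 2

    # Word-level head: per-token logits over {OK, BAD}, cross-entropy loss.
    logits = h[1:] @ word_w                      # [seq_len - 1, 2]
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
    word_loss = -np.log(probs[np.arange(len(word_targets)), word_targets]).mean()

    # Joint optimization as a weighted sum of the two task losses.
    return alpha * sent_loss + (1 - alpha) * word_loss
```

In a real multilingual setup the same heads would be shared across all language pairs, with batches drawn from the combined dataset.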

2021

Papago’s Submissions to the WMT21 Triangular Translation Task
Jeonghyeok Park | Hyunjoong Kim | Hyunchang Cho
Proceedings of the Sixth Conference on Machine Translation

This paper describes Naver Papago’s submission to the WMT21 shared triangular MT task, which aims to enhance a non-English MT system with tri-language parallel data. The provided parallel data are Russian-Chinese (direct), Russian-English (indirect), and English-Chinese (indirect). The task aims to improve the quality of a Russian-to-Chinese MT system by exploiting these direct and indirect parallel resources. The direct parallel data are noisy, having been crawled from the web. To alleviate this issue, we conduct extensive experiments to find effective data-filtering methods. Guided by the empirical observation that bilingual MT outperforms multilingual MT, and by related experimental results, we approach the task as bilingual MT, transforming the two indirect datasets into direct data. In addition, we use the Transformer, a robust translation model, as our baseline and integrate several techniques: checkpoint averaging, model ensembling, and re-ranking. Our final system improves over a baseline system by 12.7 BLEU points on the WMT21 triangular MT development set. In the official evaluation on the test set, our system is ranked 2nd in terms of BLEU score.
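Of the techniques listed, checkpoint averaging is the most mechanical and can be sketched directly. The function below is an illustrative assumption (the paper does not publish its code): it averages the parameter tensors of the last few saved checkpoints into one model, the standard recipe before decoding or ensembling.

```python
import numpy as np

def average_checkpoints(checkpoints):
    """Average several checkpoints (each a dict mapping parameter name
    to a numpy array of identical shape) into a single parameter dict."""
    n = len(checkpoints)
    averaged = {}
    for name in checkpoints[0]:
        # Element-wise mean of the same parameter across all checkpoints.
        averaged[name] = sum(ckpt[name] for ckpt in checkpoints) / n
    return averaged
```

In practice one would average, say, the last five epoch checkpoints of the Transformer before running beam search, and ensemble several such averaged models for re-ranking.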

Enhancing Language Generation with Effective Checkpoints of Pre-trained Language Model
Jeonghyeok Park | Hai Zhao
Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021