Abstract
South and North Korea both use the Korean language. However, Korean NLP research has focused only on South Korean, and existing Korean NLP systems, such as neural machine translation (NMT) models, cannot properly handle North Korean inputs. Training a model on North Korean data is the most straightforward approach to solving this problem, but there is insufficient data to train NMT models. In this study, we create data for North Korean NMT models using a comparable corpus. First, we manually create evaluation data for automatic alignment and machine translation, and then investigate automatic alignment methods suitable for North Korean. Finally, we show that a model trained on North Korean bilingual data created without human annotation significantly boosts North Korean translation accuracy compared to existing South Korean models in zero-shot settings.
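The abstract describes extracting bilingual sentence pairs from a comparable corpus by automatic alignment. The paper evaluates several alignment methods; the snippet below is only a generic sketch of one common approach, embedding-based alignment with a multilingual sentence encoder and a cosine-similarity threshold, and is not the authors' implementation. The LaBSE model choice, the 0.8 threshold, and the `align_sentences` helper are illustrative assumptions.

```python
# Hypothetical sketch of embedding-based sentence alignment over a comparable
# corpus. NOT the method from the paper; model, threshold, and names are
# assumptions for illustration only.
from sentence_transformers import SentenceTransformer
import numpy as np

def align_sentences(src_sents, tgt_sents, threshold=0.8):
    """Pair each source sentence with its most similar target sentence,
    keeping pairs whose cosine similarity exceeds the threshold."""
    model = SentenceTransformer("sentence-transformers/LaBSE")
    src_emb = model.encode(src_sents, normalize_embeddings=True)
    tgt_emb = model.encode(tgt_sents, normalize_embeddings=True)
    sims = src_emb @ tgt_emb.T  # cosine similarity (embeddings are L2-normalized)
    pairs = []
    for i, row in enumerate(sims):
        j = int(np.argmax(row))
        if row[j] >= threshold:
            pairs.append((src_sents[i], tgt_sents[j], float(row[j])))
    return pairs

# Example: pair sentences from the two sides of a comparable corpus.
pairs = align_sentences(
    ["조선민주주의인민공화국의 수도는 평양이다."],
    ["The capital of the DPRK is Pyongyang."],
)
```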
- Anthology ID: 2022.lrec-1.722
- Volume: Proceedings of the Thirteenth Language Resources and Evaluation Conference
- Month: June
- Year: 2022
- Address: Marseille, France
- Venue: LREC
- Publisher: European Language Resources Association
- Pages: 6711–6718
- URL: https://aclanthology.org/2022.lrec-1.722
- Cite (ACL): Hwichan Kim, Sangwhan Moon, Naoaki Okazaki, and Mamoru Komachi. 2022. Learning How to Translate North Korean through South Korean. In Proceedings of the Thirteenth Language Resources and Evaluation Conference, pages 6711–6718, Marseille, France. European Language Resources Association.
- Cite (Informal): Learning How to Translate North Korean through South Korean (Kim et al., LREC 2022)
- PDF: https://preview.aclanthology.org/starsem-semeval-split/2022.lrec-1.722.pdf