Utilizing Lexical Similarity between Related, Low-resource Languages for Pivot-based SMT
Anoop Kunchukuttan, Maulik Shah, Pradyot Prakash, Pushpak Bhattacharyya
Abstract
We investigate pivot-based translation between related languages in a low resource, phrase-based SMT setting. We show that a subword-level pivot-based SMT model using a related pivot language is substantially better than word and morpheme-level pivot models. It is also highly competitive with the best direct translation model, which is encouraging as no direct source-target training corpus is used. We also show that combining multiple related language pivot models can rival a direct translation model. Thus, the use of subwords as translation units coupled with multiple related pivot languages can compensate for the lack of a direct parallel corpus.
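The pivot strategy the abstract describes can be pictured as chaining two independently trained models through the pivot language, with all three languages represented as subword units rather than words. The following minimal Python sketch illustrates that chaining; the function names and the toy character-bigram segmenter are illustrative assumptions, not the paper's released code (the paper works with finer-grained subword units for lexically similar languages, e.g. BPE-style segments).

```python
def segment_subwords(sentence: str) -> str:
    """Toy subword segmentation: split each word into character bigrams.

    A stand-in for a real learned subword segmenter (e.g. BPE). It only
    illustrates that translation operates on units smaller than words,
    which lets lexically similar languages share cognate fragments.
    """
    units = []
    for word in sentence.split():
        units.extend(word[i:i + 2] for i in range(0, len(word), 2))
    return " ".join(units)


def pivot_translate(src_sentence, translate_src_to_pivot, translate_pivot_to_tgt):
    """Chain two independently trained models through the pivot language.

    Both arguments are hypothetical callables standing in for trained
    source-to-pivot and pivot-to-target SMT systems; no direct
    source-target corpus is assumed anywhere in this pipeline.
    """
    pivot_sentence = translate_src_to_pivot(segment_subwords(src_sentence))
    return translate_pivot_to_tgt(pivot_sentence)


if __name__ == "__main__":
    # Stand-in "models": identity functions, just to show the chaining.
    print(pivot_translate("hello world", lambda s: s, lambda s: s))
    # -> "he ll o wo rl d"
```

The paper's further finding, that several such pivot pipelines through different related languages can be combined to rival a direct model, would amount to running `pivot_translate` per pivot language and combining the outputs.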
- Anthology ID: I17-2048
- Volume: Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 2: Short Papers)
- Month: November
- Year: 2017
- Address: Taipei, Taiwan
- Editors: Greg Kondrak, Taro Watanabe
- Venue: IJCNLP
- Publisher: Asian Federation of Natural Language Processing
- Pages: 283–289
- URL: https://aclanthology.org/I17-2048
- Cite (ACL): Anoop Kunchukuttan, Maulik Shah, Pradyot Prakash, and Pushpak Bhattacharyya. 2017. Utilizing Lexical Similarity between Related, Low-resource Languages for Pivot-based SMT. In Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 2: Short Papers), pages 283–289, Taipei, Taiwan. Asian Federation of Natural Language Processing.
- Cite (Informal): Utilizing Lexical Similarity between Related, Low-resource Languages for Pivot-based SMT (Kunchukuttan et al., IJCNLP 2017)
- PDF: https://aclanthology.org/I17-2048.pdf