Tackling the Low-resource Challenge for Canonical Segmentation

Manuel Mager, Özlem Çetinoğlu, Katharina Kann


Abstract
Canonical morphological segmentation consists of dividing words into their standardized morphemes. Here, we are interested in approaches for the task when training data is limited. We compare model performance in a simulated low-resource setting for the high-resource languages German, English, and Indonesian to experiments on new datasets for the truly low-resource languages Popoluca and Tepehua. We explore two new models for the task, borrowing from the closely related area of morphological generation: an LSTM pointer-generator and a sequence-to-sequence model with hard monotonic attention trained with imitation learning. We find that, in the low-resource setting, the novel approaches outperform existing ones on all languages by up to 11.4% accuracy. However, while accuracy in emulated low-resource scenarios is over 50% for all languages, for the truly low-resource languages Popoluca and Tepehua, our best model only obtains 37.4% and 28.4% accuracy, respectively. Thus, we conclude that canonical segmentation is still a challenging task for low-resource languages.
Anthology ID:
2020.emnlp-main.423
Volume:
Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)
Month:
November
Year:
2020
Address:
Online
Editors:
Bonnie Webber, Trevor Cohn, Yulan He, Yang Liu
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
5237–5250
URL:
https://aclanthology.org/2020.emnlp-main.423
DOI:
10.18653/v1/2020.emnlp-main.423
Cite (ACL):
Manuel Mager, Özlem Çetinoğlu, and Katharina Kann. 2020. Tackling the Low-resource Challenge for Canonical Segmentation. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 5237–5250, Online. Association for Computational Linguistics.
Cite (Informal):
Tackling the Low-resource Challenge for Canonical Segmentation (Mager et al., EMNLP 2020)
PDF:
https://preview.aclanthology.org/naacl24-info/2020.emnlp-main.423.pdf
Video:
https://slideslive.com/38939170