Abstract
Neural sequence labelling approaches have achieved state-of-the-art results in morphological tagging. We evaluate the efficacy of four standard sequence labelling models on Sanskrit, a morphologically rich, fusional Indian language. As its label space can theoretically contain more than 40,000 labels, systems that explicitly model the internal structure of a label are better suited to the task, because of their ability to generalise to labels not seen during training. We find that although some neural models perform better than others, one of the common causes of error for all of these models is misprediction due to syncretism.
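The abstract's point that taggers which model the internal structure of a label can generalise to unseen tags can be illustrated with a small sketch. The PyTorch snippet below is not the authors' implementation; the attribute inventory, sizes, and class names are assumptions made purely for illustration. It contrasts a monolithic head, which can only output full tags that existed as training classes, with a factored head that predicts each morphological attribute separately and can therefore compose tag combinations never observed during training.

```python
# Minimal sketch (not the paper's code): monolithic vs. factored tagging heads.
# Attribute names/sizes and the full-tag count below are illustrative assumptions.

import torch
import torch.nn as nn

HIDDEN = 128
ATTRIBUTES = {"case": 8, "number": 3, "gender": 3, "tense": 6}  # assumed inventory
NUM_FULL_TAGS = 1000  # stand-in for the (theoretically >40,000) full tag set


class MonolithicHead(nn.Module):
    """One softmax over every full tag seen in training; unseen tags are unreachable."""
    def __init__(self, hidden, num_tags):
        super().__init__()
        self.out = nn.Linear(hidden, num_tags)

    def forward(self, h):            # h: (batch, seq_len, hidden) encoder states
        return self.out(h)           # (batch, seq_len, num_tags)


class FactoredHead(nn.Module):
    """One softmax per morphological attribute; the full tag is their combination."""
    def __init__(self, hidden, attributes):
        super().__init__()
        self.heads = nn.ModuleDict(
            {name: nn.Linear(hidden, size) for name, size in attributes.items()}
        )

    def forward(self, h):
        # Per-attribute logits; decoding takes the argmax of each attribute
        # independently, so unseen attribute combinations remain reachable.
        return {name: head(h) for name, head in self.heads.items()}


if __name__ == "__main__":
    h = torch.randn(2, 5, HIDDEN)    # fake encoder states: 2 sentences, 5 tokens each
    mono = MonolithicHead(HIDDEN, NUM_FULL_TAGS)(h)
    fact = FactoredHead(HIDDEN, ATTRIBUTES)(h)
    print(mono.shape)                             # torch.Size([2, 5, 1000])
    print({k: v.shape for k, v in fact.items()})  # one logit tensor per attribute
```

The factored design also makes syncretism errors easier to inspect, since a misprediction can be traced to the individual attribute (e.g. case or number) whose surface forms coincide.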
- Anthology ID: 2020.sigmorphon-1.23
- Volume: Proceedings of the 17th SIGMORPHON Workshop on Computational Research in Phonetics, Phonology, and Morphology
- Month: July
- Year: 2020
- Address: Online
- Editors: Garrett Nicolai, Kyle Gorman, Ryan Cotterell
- Venue: SIGMORPHON
- SIG: SIGMORPHON
- Publisher: Association for Computational Linguistics
- Pages: 198–203
- URL: https://aclanthology.org/2020.sigmorphon-1.23
- DOI: 10.18653/v1/2020.sigmorphon-1.23
- Cite (ACL): Ashim Gupta, Amrith Krishna, Pawan Goyal, and Oliver Hellwig. 2020. Evaluating Neural Morphological Taggers for Sanskrit. In Proceedings of the 17th SIGMORPHON Workshop on Computational Research in Phonetics, Phonology, and Morphology, pages 198–203, Online. Association for Computational Linguistics.
- Cite (Informal): Evaluating Neural Morphological Taggers for Sanskrit (Gupta et al., SIGMORPHON 2020)
- PDF: https://preview.aclanthology.org/nschneid-patch-3/2020.sigmorphon-1.23.pdf
- Code: ashim95/sanskrit-morphological-taggers