Laxmidhar Behera


2021

A Little Pretraining Goes a Long Way: A Case Study on Dependency Parsing Task for Low-resource Morphologically Rich Languages
Jivnesh Sandhan | Amrith Krishna | Ashim Gupta | Laxmidhar Behera | Pawan Goyal
Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Student Research Workshop

Neural dependency parsing has achieved remarkable performance for many domains and languages. However, the need for massive labelled data limits the effectiveness of these approaches for low-resource languages. In this work, we focus on dependency parsing for morphologically rich languages (MRLs) in a low-resource setting. Although morphological information is essential for the dependency parsing task, morphological disambiguation and the lack of powerful analyzers make it challenging to obtain this information for MRLs. To address these challenges, we propose simple auxiliary tasks for pretraining. We perform experiments on 10 MRLs in low-resource settings to measure the efficacy of our proposed pretraining method and observe an average absolute gain of 2 points (UAS) and 3.6 points (LAS).

2019

Revisiting the Role of Feature Engineering for Compound Type Identification in Sanskrit
Jivnesh Sandhan | Amrith Krishna | Pawan Goyal | Laxmidhar Behera
Proceedings of the 6th International Sanskrit Computational Linguistics Symposium