Sanath Jayasena


2022

BERTifying Sinhala - A Comprehensive Analysis of Pre-trained Language Models for Sinhala Text Classification
Vinura Dhananjaya | Piyumal Demotte | Surangika Ranathunga | Sanath Jayasena
Proceedings of the Thirteenth Language Resources and Evaluation Conference

This research provides the first comprehensive analysis of the performance of pre-trained language models for Sinhala text classification. We evaluate these models on a set of Sinhala text classification tasks, and our analysis shows that of the pre-trained multilingual models that include Sinhala (XLM-R, LaBSE, and LASER), XLM-R is by far the best model for Sinhala text classification. We also pre-train two RoBERTa-based monolingual Sinhala models, which are far superior to the existing pre-trained language models for Sinhala. We show that when fine-tuned, these pre-trained language models set a very strong baseline for Sinhala text classification and are robust in situations where labeled data is insufficient for fine-tuning. We further provide a set of recommendations for using pre-trained models for Sinhala text classification. We also introduce new annotated datasets useful for future research in Sinhala text classification and publicly release our pre-trained models.

2020

Dialog policy optimization for low resource setting using Self-play and Reward based Sampling
Tharindu Madusanka | Durashi Langappuli | Thisara Welmilla | Uthayasanker Thayasivam | Sanath Jayasena
Proceedings of the 34th Pacific Asia Conference on Language, Information and Computation

2018

Improving domain-specific SMT for low-resourced languages using data from different domains
Fathima Farhath | Pranavan Theivendiram | Surangika Ranathunga | Sanath Jayasena | Gihan Dias
Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018)

2016

Automatic Creation of a Sentence Aligned Sinhala-Tamil Parallel Corpus
Riyafa Abdul Hameed | Nadeeshani Pathirennehelage | Anusha Ihalapathirana | Maryam Ziyad Mohamed | Surangika Ranathunga | Sanath Jayasena | Gihan Dias | Sandareka Fernando
Proceedings of the 6th Workshop on South and Southeast Asian Natural Language Processing (WSSANLP2016)

A sentence-aligned parallel corpus is an important prerequisite in statistical machine translation. However, manual creation of such a parallel corpus is time-consuming and requires experts fluent in both languages. Automatic creation of a sentence-aligned parallel corpus from parallel text is the solution to this problem. In this paper, we present the first-ever empirical evaluation carried out to identify the best method to automatically create a sentence-aligned Sinhala-Tamil parallel corpus. Annual reports from Sri Lankan government institutions were used as the parallel text for aligning. Despite both Sinhala and Tamil being under-resourced languages, we were able to achieve an F-score of 0.791 using a hybrid approach that makes use of a bilingual dictionary.
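The abstract notes that the best-performing hybrid approach draws on a bilingual dictionary. As a rough illustrative sketch only — the scoring function, greedy pairing strategy, and toy dictionary below are assumptions for exposition, not the paper's actual method — dictionary evidence can be used to score and pair candidate sentences like this:

```python
# Hedged sketch of dictionary-based sentence alignment. The bilingual
# dictionary maps each source token to a set of candidate translations;
# all tokens here are hypothetical placeholders.

def dictionary_score(src_tokens, tgt_tokens, bilingual_dict):
    """Fraction of source tokens whose dictionary translation
    appears in the target sentence."""
    if not src_tokens:
        return 0.0
    tgt = set(tgt_tokens)
    hits = sum(
        1 for w in src_tokens
        if any(t in tgt for t in bilingual_dict.get(w, ()))
    )
    return hits / len(src_tokens)

def align_greedy(src_sents, tgt_sents, bilingual_dict, threshold=0.5):
    """Greedy 1-to-1 alignment: pair each source sentence with the
    highest-scoring unused target sentence above the threshold."""
    pairs, used = [], set()
    for i, src in enumerate(src_sents):
        best, best_j = threshold, None
        for j, tgt in enumerate(tgt_sents):
            if j in used:
                continue
            score = dictionary_score(src, tgt, bilingual_dict)
            if score > best:
                best, best_j = score, j
        if best_j is not None:
            used.add(best_j)
            pairs.append((i, best_j))
    return pairs
```

A real aligner would additionally use sentence-length statistics and allow 1-to-many alignments; this sketch only shows how dictionary overlap contributes an alignment signal.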

Comprehensive Part-Of-Speech Tag Set and SVM based POS Tagger for Sinhala
Sandareka Fernando | Surangika Ranathunga | Sanath Jayasena | Gihan Dias
Proceedings of the 6th Workshop on South and Southeast Asian Natural Language Processing (WSSANLP2016)

This paper presents a new comprehensive multi-level Part-Of-Speech tag set and a Support Vector Machine based Part-Of-Speech tagger for the Sinhala language. The currently available tag set for Sinhala has two limitations: the unavailability of tags to represent some word classes and the lack of tags to capture inflection-based grammatical variations of words. The new tag set presented in this paper overcomes both of these limitations. The accuracy of available Sinhala Part-Of-Speech taggers, which are based on Hidden Markov Models, still falls far behind the state of the art. Our Support Vector Machine based tagger achieved an overall accuracy of 84.68%, with 59.86% accuracy for unknown words and 87.12% for known words, when the test set contains 10% unknown words.
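An SVM-based tagger of this kind classifies each token from hand-crafted features. As a hedged illustration — the concrete feature templates and the romanized toy tokens below are assumptions for exposition, not taken from the paper — a typical context-window feature extractor for such a tagger looks like:

```python
# Hedged sketch: context-window features of the kind commonly fed to a
# linear SVM in feature-based POS taggers. Affix features (prefixes,
# suffixes) help generalize to unknown, inflected word forms.

def token_features(tokens, i):
    """Feature dict for the token at position i in a sentence."""
    w = tokens[i]
    return {
        "word": w,
        "suffix3": w[-3:],          # inflectional endings
        "suffix2": w[-2:],
        "prefix2": w[:2],
        "prev_word": tokens[i - 1] if i > 0 else "<s>",
        "next_word": tokens[i + 1] if i < len(tokens) - 1 else "</s>",
        "is_first": i == 0,
        "is_last": i == len(tokens) - 1,
    }
```

Each feature dict would then be vectorized (e.g. one-hot encoded) and passed to a linear SVM trained one-vs-rest over the tag set.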