Revathy V. R

Also published as: Revathy V R


2025

Leveraging LLaMa for Abstractive Text Summarisation in Malayalam: An Experimental Study
Hristo Tanev | Anitha S. Pillai | Revathy V. R
Proceedings of the 15th International Conference on Recent Advances in Natural Language Processing - Natural Language Processing in the Generative AI Era

Recent years have witnessed tremendous advances in natural language processing (NLP), driven by the development of sophisticated language models that have automated several NLP applications, including text summarisation. Despite this progress, Malayalam text summarisation still faces challenges because of the peculiarities of the language. This paper explores the potential of a large language model, specifically the LLaMA (Large Language Model Meta AI) framework, for text summarisation in Malayalam. To assess the performance of LLaMA on summarisation for the low-resource language Malayalam, a dataset of reference texts and summaries was curated. The evaluation showed that the LLaMA model could effectively summarise lengthy articles while preserving important information and coherence. The generated summaries were compared with human-written reference summaries to observe how closely the model matched human-level summarisation. The results showed that LLMs can handle the Malayalam text summarisation task, but further research is needed to identify the most effective training strategy.
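
A minimal sketch of how such a LLaMA-based summarisation setup could look with the Hugging Face transformers library; the checkpoint name, prompt wording, and decoding settings below are illustrative assumptions, since the abstract does not specify them:

```python
# Hypothetical sketch of LLaMA-based Malayalam summarisation using the
# Hugging Face transformers pipeline. The checkpoint, prompt, and
# generation settings are assumptions, not the paper's actual setup.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="meta-llama/Llama-2-7b-chat-hf",  # assumed LLaMA checkpoint
    device_map="auto",
)

article = "..."  # a Malayalam source article (placeholder)
prompt = (
    "Summarise the following Malayalam article in Malayalam "
    "in two or three sentences.\n\n"
    f"{article}\n\nSummary:"
)

output = generator(prompt, max_new_tokens=128, do_sample=False)
# The text-generation pipeline echoes the prompt, so strip it to keep
# only the generated summary.
summary = output[0]["generated_text"][len(prompt):].strip()
print(summary)
```

Generated summaries produced this way could then be compared against the curated human references, in the spirit of the evaluation described above.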

OdiaGenAI participation at WAT 2025
Debasish Dhal | Sambit Sekhar | Revathy V R | Shantipriya Parida | Akash Kumar Dhaka
Proceedings of the Twelfth Workshop on Asian Translation (WAT 2025)

We at ODIAGEN provide a detailed description of the model, training procedure, results, and conclusions of our submission to the Workshop on Asian Translation (WAT 2025). This year we focus only on text-to-text translation tasks for low-resource Indic languages, specifically targeting Hindi, Bengali, Malayalam, and Odia. The system uses the large language model NLLB-200, fine-tuned on large datasets of over 100K rows for each targeted language. The training data comprises the data provided by the organisers, as in previous years, augmented with a much larger set of 100K sentences per language subsampled from the Samanantar dataset provided by AI4Bharat. Of the eight evaluation/challenge test sets, our approach obtained the highest BLEU scores recorded since the task's inception on five.
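
A minimal sketch of inference with an NLLB-200 checkpoint in transformers, roughly the kind of system described above; the checkpoint size, language direction, and example sentence are illustrative assumptions, and the submission would run a checkpoint fine-tuned on the WAT and Samanantar data rather than the public one used here:

```python
# Hypothetical sketch of translation with NLLB-200 via Hugging Face
# transformers. The public distilled checkpoint and the English->Hindi
# direction are assumptions made for illustration only.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

checkpoint = "facebook/nllb-200-distilled-600M"  # assumed model size
tokenizer = AutoTokenizer.from_pretrained(checkpoint, src_lang="eng_Latn")
model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint)

inputs = tokenizer("The weather is pleasant today.", return_tensors="pt")
generated = model.generate(
    **inputs,
    # Force decoding to start with the target-language tag (Hindi here);
    # NLLB uses FLORES-200 codes such as "hin_Deva", "ben_Beng", "ory_Orya".
    forced_bos_token_id=tokenizer.convert_tokens_to_ids("hin_Deva"),
    max_length=64,
)
print(tokenizer.batch_decode(generated, skip_special_tokens=True)[0])
```

Scoring such outputs against the official references with a tool like sacrebleu would mirror the BLEU-based evaluation reported above.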