@inproceedings{manukonda-kodali-2025-bytesizedllm-dravidianlangtech-2025-sentiment,
    title = "byte{S}ized{LLM}@{D}ravidian{L}ang{T}ech 2025: Sentiment Analysis in {T}amil Using Transliteration-Aware {XLM}-{R}o{BERT}a and Attention-{B}i{LSTM}",
    author = "Manukonda, Durga Prasad  and
      Kodali, Rohith Gowtham",
    editor = "Chakravarthi, Bharathi Raja  and
      Priyadharshini, Ruba  and
      Madasamy, Anand Kumar  and
      Thavareesan, Sajeetha  and
      Sherly, Elizabeth  and
      Rajiakodi, Saranya  and
      Palani, Balasubramanian  and
      Subramanian, Malliga  and
      Cn, Subalalitha  and
      Chinnappa, Dhivya",
    booktitle = "Proceedings of the Fifth Workshop on Speech, Vision, and Language Technologies for Dravidian Languages",
    month = may,
    year = "2025",
    address = "Acoma, The Albuquerque Convention Center, Albuquerque, New Mexico",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2025.dravidianlangtech-1.16/",
    doi = "10.18653/v1/2025.dravidianlangtech-1.16",
    pages = "92--97",
    ISBN = "979-8-89176-228-2",
    abstract = "This study investigates sentiment analysis in code-mixed Tamil-English text using an Attention BiLSTM-XLM-RoBERTa model, combining multilingual embeddings with sequential context modeling to enhance classification performance. The model was fine-tuned using masked language modeling and trained with an attention-based BiLSTM classifier to capture sentiment patterns in transliterated and informal text. Despite computational constraints limiting pretraining, the approach achieved a macro F1 of 0.5036 and ranked first in the competition. The model performed best on the Positive class, while Mixed Feelings and Unknown State showed lower recall due to class imbalance and ambiguity. Error analysis reveals challenges in handling non-standard transliterations, sentiment shifts, and informal language variations in social media text. These findings demonstrate the effectiveness of transformer-based multilingual embeddings and sequential modeling for sentiment classification in code-mixed text."
}