Revanth Gundam


2025

Zero at SemEval-2025 Task 11: Multilingual Emotion Classification with BERT Variants: A Comparative Study
Revanth Gundam | Abhinav Marri | Radhika Mamidi
Proceedings of the 19th International Workshop on Semantic Evaluation (SemEval-2025)

Emotion detection in text plays a crucial role in NLP applications such as sentiment analysis and feedback analysis. This study tackles two tasks: multi-label emotion detection, where the goal is to classify text according to six emotions (joy, sadness, fear, anger, surprise, and disgust) in a multilingual setting, and emotion intensity prediction, which assigns an ordinal intensity score to each of these emotions. Using the BRIGHTER dataset, a multilingual corpus spanning 28 languages, the paper addresses issues such as class imbalance by treating each emotion as an independent binary classification problem. We first explore simple strategies that pair static embeddings, such as GloVe, with logistic regression classifiers. To capture contextual nuances more effectively, we then fine-tune transformer-based models such as BERT and RoBERTa. Our approach demonstrates the benefits of fine-tuning for improved emotion prediction, while also highlighting the challenges of multilingual and multi-label classification.
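A minimal sketch of the multi-label setup the abstract describes: each of the six emotions is treated as an independent binary target, so a single encoder with a six-way sigmoid head can be trained with per-label binary cross-entropy. The model name, threshold, and example text below are illustrative assumptions, not the authors' exact configuration.

    # Sketch only: multi-label emotion classification with an independent
    # binary decision per emotion (assumed mBERT backbone, untrained head).
    import torch
    from transformers import AutoTokenizer, AutoModelForSequenceClassification

    EMOTIONS = ["joy", "sadness", "fear", "anger", "surprise", "disgust"]

    tokenizer = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
    model = AutoModelForSequenceClassification.from_pretrained(
        "bert-base-multilingual-cased",
        num_labels=len(EMOTIONS),
        problem_type="multi_label_classification",  # trains with BCEWithLogitsLoss
    )

    def predict_emotions(text, threshold=0.5):
        """Return an independent yes/no decision for each emotion."""
        inputs = tokenizer(text, return_tensors="pt", truncation=True)
        with torch.no_grad():
            logits = model(**inputs).logits.squeeze(0)
        probs = torch.sigmoid(logits)  # one probability per emotion, not a softmax
        return {emo: bool(p >= threshold) for emo, p in zip(EMOTIONS, probs)}

    print(predict_emotions("I can't believe we finally won the championship!"))

Because the labels are scored with independent sigmoids rather than a softmax, a text can be assigned several emotions at once or none at all, which matches the multi-label formulation above.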

Zero at SemEval-2025 Task 2: Entity-Aware Machine Translation: Fine-Tuning NLLB for Improved Named Entity Translation
Revanth Gundam | Abhinav Marri | Advaith Malladi | Radhika Mamidi
Proceedings of the 19th International Workshop on Semantic Evaluation (SemEval-2025)

Machine Translation (MT) is an essential tool for communication among people across different cultures, yet Named Entity (NE) translation remains a major challenge because named entities are rare and often ambiguous. Traditional approaches, such as lexicons or parallel corpora, often fail to generalize to unseen entities and therefore perform poorly. To address this, we create a silver dataset using the Google Translate API and fine-tune the facebook/nllb-200-distilled-600M model with LoRA (Low-Rank Adaptation) to improve translation accuracy while maintaining efficient memory use. Evaluated with metrics such as BLEU, COMET, and M-ETA, our results show that fine-tuning a specialized MT model improves NE translation without relying on large-scale general-purpose models.
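A minimal sketch of the LoRA fine-tuning setup the abstract describes, assuming the Hugging Face peft library. The rank, alpha, dropout, target modules, and language codes are illustrative assumptions rather than the paper's reported hyperparameters.

    # Sketch only: attach LoRA adapters to NLLB-200 (distilled 600M) so that
    # only the low-rank adapter weights are updated during fine-tuning.
    from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
    from peft import LoraConfig, TaskType, get_peft_model

    model_name = "facebook/nllb-200-distilled-600M"
    tokenizer = AutoTokenizer.from_pretrained(
        model_name, src_lang="eng_Latn", tgt_lang="deu_Latn"  # assumed language pair
    )
    base_model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

    lora_config = LoraConfig(
        task_type=TaskType.SEQ_2_SEQ_LM,
        r=16,                                  # low-rank dimension (assumed)
        lora_alpha=32,                         # scaling factor (assumed)
        lora_dropout=0.05,
        target_modules=["q_proj", "v_proj"],   # attention projections in NLLB layers
    )
    model = get_peft_model(base_model, lora_config)
    model.print_trainable_parameters()  # only the adapters train, keeping memory use low

    # The silver pairs (source sentence, Google-Translate reference) would then be
    # tokenized and passed to a standard Seq2SeqTrainer; outputs are scored with
    # BLEU, COMET, and M-ETA as in the abstract.

Freezing the 600M-parameter base model and training only the low-rank adapters is what keeps memory use modest while still specializing the model toward named-entity translation.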