@inproceedings{kavatagi-etal-2023-vtubgm,
    title = "{VTUBGM}@{LT}-{EDI}-2023: Hope Speech Identification using Layered Differential Training of {ULMF}it",
    author = "Kavatagi, Sanjana M.  and
      Rachh, Rashmi R.  and
      Biradar, Shankar S.",
    editor = "Chakravarthi, Bharathi R.  and
      Bharathi, B.  and
      Griffith, Josephine  and
      Bali, Kalika  and
      Buitelaar, Paul",
    booktitle = "Proceedings of the Third Workshop on Language Technology for Equality, Diversity and Inclusion",
    month = sep,
    year = "2023",
    address = "Varna, Bulgaria",
    publisher = "INCOMA Ltd., Shoumen, Bulgaria",
    url = "https://preview.aclanthology.org/ingest-emnlp/2023.ltedi-1.32/",
    pages = "209--213",
    abstract = "Hope speech embodies optimistic and uplifting sentiments, aiming to inspire individuals to maintain faith in positive progress and actively contribute to a better future. In this article, we outline the model presented by our team, VTUBGM, for the shared task ``Hope Speech Detection for Equality, Diversity, and Inclusion'' at LT-EDI-RANLP 2023. This task entails classifying YouTube comments, which is a classification problem at the comment level. The task was conducted in four different languages: Bulgarian, English, Hindi, and Spanish. VTUBGM submitted a model developed through layered differential training of the ULMFit model. As a result, a macro F1 score of 0.48 was obtained and ranked 3rd in the competition."
}
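For readers unfamiliar with the approach named in the title, the sketch below shows the general ULMFiT recipe of layered differential training: gradual unfreezing of layer groups combined with discriminative (per-layer) learning rates, here expressed with the fastai library. It is a minimal illustration under assumed column names, file name, and hyperparameters, not the VTUBGM team's implementation, and the pretrained AWD_LSTM backbone shown covers only an English-style setup rather than all four task languages.

```python
# Illustrative sketch only: ULMFiT-style layered differential training with
# fastai. Data columns, file name, and hyperparameters are assumptions for
# illustration and are not taken from the paper.
import pandas as pd
from fastai.text.all import *

# Hypothetical comment-level dataset: one text column and one label column
# (e.g. "Hope_speech" vs. "Non_hope_speech").
df = pd.read_csv("hope_speech_comments.csv")  # assumed file name

dls = TextDataLoaders.from_df(
    df, text_col="comment", label_col="label", valid_pct=0.2
)

learn = text_classifier_learner(
    dls, AWD_LSTM, drop_mult=0.5, metrics=F1Score(average="macro")
)

# Layered differential training: unfreeze one layer group at a time and train
# each stage with discriminative learning rates (lower rates for earlier
# layers), following the standard ULMFiT recipe.
learn.fit_one_cycle(1, 2e-2)                              # classifier head only
learn.freeze_to(-2)
learn.fit_one_cycle(1, slice(1e-2 / (2.6 ** 4), 1e-2))    # last two layer groups
learn.freeze_to(-3)
learn.fit_one_cycle(1, slice(5e-3 / (2.6 ** 4), 5e-3))    # last three layer groups
learn.unfreeze()
learn.fit_one_cycle(2, slice(1e-3 / (2.6 ** 4), 1e-3))    # full model
```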