@inproceedings{khushbu-etal-2023-ushoshi2023,
    title = "Ushoshi2023 at {BLP}-2023 Task 2: A Comparison of Traditional to Advanced Linguistic Models to Analyze Sentiment in {B}angla Texts",
    author = "Khushbu, Sharun  and
      Nur, Nasheen  and
      Ahmed, Mohiuddin  and
      Nur, Nashtarin",
    editor = "Alam, Firoj  and
      Kar, Sudipta  and
      Chowdhury, Shammur Absar  and
      Sadeque, Farig  and
      Amin, Ruhul",
    booktitle = "Proceedings of the First Workshop on Bangla Language Processing (BLP-2023)",
    month = dec,
    year = "2023",
    address = "Singapore",
    publisher = "Association for Computational Linguistics",
    url = "https://preview.aclanthology.org/ingest-emnlp/2023.banglalp-1.38/",
    doi = "10.18653/v1/2023.banglalp-1.38",
    pages = "293--299",
    abstract = "This article describes our analytical approach designed for BLP Workshop-2023 Task-2: Sentiment Analysis. During the actual task submission, we used DistilBERT. However, we later applied rigorous hyperparameter tuning and pre-processing, improving the result to 68{\%} accuracy and a 68{\%} micro-F1 score with a vanilla LSTM. Traditional machine learning models were applied for comparison, where 75{\%} accuracy was achieved with a traditional SVM. Our contributions are a) data augmentation using the oversampling method to remove data imbalance and b) attention masking for data encoding with masked language modeling to capture representations of language semantics effectively, further demonstrated with explainable AI. Originally, our system scored 0.26 micro-F1 in the competition and ranked 30th among the participants with a basic DistilBERT model, which we later improved to 0.68 and 0.65 with LSTM and XLM-RoBERTa-base models, respectively."
}