@inproceedings{pranesh-etal-2020-clplm,
    title = "{CLPLM}: Character Level Pretrained Language Model for {E}xtracting {S}upport Phrases for Sentiment Labels",
    author = "Pranesh, Raj  and
      Kumar, Sumit  and
      Shekhar, Ambesh",
    editor = "Bhattacharyya, Pushpak  and
      Sharma, Dipti Misra  and
      Sangal, Rajeev",
    booktitle = "Proceedings of the 17th International Conference on Natural Language Processing (ICON)",
    month = dec,
    year = "2020",
    address = "Indian Institute of Technology Patna, Patna, India",
    publisher = "NLP Association of India (NLPAI)",
    url = "https://preview.aclanthology.org/ingest-emnlp/2020.icon-main.64/",
    pages = "475--480",
    abstract = "In this paper, we have designed a character-level pre-trained language model for extracting support phrases from tweets based on the sentiment label. We also propose a character-level ensemble model designed by properly blending Pre-trained Contextual Embedding (PCE) models (RoBERTa, BERT, and ALBERT) with neural network models (RNN, CNN, and WaveNet) at different stages of the model. For a given tweet and associated sentiment label, our model predicts the span of phrases in the tweet that prompts the particular sentiment. In our experiments, we have explored various model architectures and configurations for both single and ensemble models. We performed a systematic comparative analysis of all the models{'} performance based on the Jaccard score obtained. The best-performing ensemble model obtained the highest Jaccard score of 73.5, giving it a relative improvement of 2.4{\%} over the best-performing single RoBERTa-based character-level model, at 71.5 (Jaccard score)."
}