@inproceedings{wiriyathammabhum-2022-tedb,
    title = "{TEDB} System Description to a Shared Task on Euphemism Detection 2022",
    author = "Wiriyathammabhum, Peratham",
    editor = "Ghosh, Debanjan  and
      Beigman Klebanov, Beata  and
      Muresan, Smaranda  and
      Feldman, Anna  and
      Poria, Soujanya  and
      Chakrabarty, Tuhin",
    booktitle = "Proceedings of the 3rd Workshop on Figurative Language Processing (FLP)",
    month = dec,
    year = "2022",
    address = "Abu Dhabi, United Arab Emirates (Hybrid)",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.flp-1.1/",
    doi = "10.18653/v1/2022.flp-1.1",
    pages = "1--7",
    abstract = "In this report, we describe our Transformers for euphemism detection baseline (TEDB) submissions to the 2022 shared task on euphemism detection. We cast the task of predicting euphemism as text classification and considered Transformer-based models, which are the current state-of-the-art methods for text classification. We explored different training schemes, pretrained models, and model architectures. Our best result of 0.816 F1-score (0.818 precision and 0.814 recall) uses a euphemism-detection-finetuned TweetEval/TimeLMs-pretrained RoBERTa model as a feature-extractor frontend with a KimCNN classifier backend, trained end-to-end using a cosine annealing scheduler. We observed that models pretrained on sentiment analysis and offensiveness detection correlate with higher F1-scores, while pretraining on other tasks, such as sarcasm detection, yields lower F1-scores. Also, adding more word-vector channels did not improve performance in our experiments."
}
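
The abstract gives enough architectural detail to sketch the winning configuration. Below is a minimal, hypothetical PyTorch sketch of that pipeline (a TweetEval/TimeLMs RoBERTa frontend feeding a KimCNN backend, trained end-to-end with a cosine annealing scheduler); the checkpoint name, kernel sizes, filter counts, and learning-rate settings are assumptions, not the authors' released code.

import torch
import torch.nn as nn
import torch.nn.functional as F
from transformers import AutoModel, AutoTokenizer

class RobertaKimCNN(nn.Module):
    def __init__(self, model_name="cardiffnlp/twitter-roberta-base-sentiment",
                 kernel_sizes=(3, 4, 5), num_filters=100, num_classes=2):
        super().__init__()
        # TweetEval/TimeLMs-pretrained RoBERTa as the feature-extractor frontend
        # (exact checkpoint is an assumption)
        self.encoder = AutoModel.from_pretrained(model_name)
        hidden = self.encoder.config.hidden_size
        # KimCNN backend: parallel 1D convolutions over the token sequence
        self.convs = nn.ModuleList(
            [nn.Conv1d(hidden, num_filters, k) for k in kernel_sizes])
        self.classifier = nn.Linear(num_filters * len(kernel_sizes), num_classes)

    def forward(self, input_ids, attention_mask):
        # (batch, seq_len, hidden) contextual token features from the encoder
        feats = self.encoder(input_ids=input_ids,
                             attention_mask=attention_mask).last_hidden_state
        x = feats.transpose(1, 2)  # Conv1d expects (batch, hidden, seq_len)
        # convolve, ReLU, then max-pool over time for each kernel size
        pooled = [F.relu(conv(x)).amax(dim=2) for conv in self.convs]
        return self.classifier(torch.cat(pooled, dim=1))

tokenizer = AutoTokenizer.from_pretrained("cardiffnlp/twitter-roberta-base-sentiment")
model = RobertaKimCNN()
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
# cosine annealing scheduler, as named in the abstract; horizon is assumed
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=10)

batch = tokenizer(["he passed away last night"], return_tensors="pt",
                  padding=True, truncation=True)
logits = model(batch["input_ids"], batch["attention_mask"])  # (1, 2) class scores

The parallel convolutions of different widths act as n-gram detectors over the contextual token embeddings, which is the standard KimCNN design; here they consume RoBERTa features rather than static word vectors.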