@inproceedings{yoo-etal-2022-detection,
    title = "Detection of Adversarial Examples in Text Classification: Benchmark and Baseline via Robust Density Estimation",
    author = "Yoo, KiYoon  and
      Kim, Jangho  and
      Jang, Jiho  and
      Kwak, Nojun",
    editor = "Muresan, Smaranda  and
      Nakov, Preslav  and
      Villavicencio, Aline",
    booktitle = "Findings of the Association for Computational Linguistics: ACL 2022",
    month = may,
    year = "2022",
    address = "Dublin, Ireland",
    publisher = "Association for Computational Linguistics",
    url = "https://preview.aclanthology.org/ingest-emnlp/2022.findings-acl.289/",
    doi = "10.18653/v1/2022.findings-acl.289",
    pages = "3656--3672",
    abstract = "Word-level adversarial attacks have shown success in NLP models, drastically decreasing the performance of transformer-based models in recent years. As a countermeasure, adversarial defense has been explored, but relatively few efforts have been made to detect adversarial examples. However, detecting adversarial examples may be crucial for automated tasks (e.g. review sentiment analysis) that wish to amass information about a certain population and additionally be a step towards a robust defense system. To this end, we release a dataset for four popular attack methods on four datasets and four models to encourage further research in this field. Along with it, we propose a competitive baseline based on density estimation that has the highest AUC on 29 out of 30 dataset-attack-model combinations. The source code is released (\url{https://github.com/bangawayoo/adversarial-examples-in-text-classification})."
}