@inproceedings{srinivasan-vajjala-2023-multilingual,
    title = "A Multilingual Evaluation of {NER} Robustness to Adversarial Inputs",
    author = "Srinivasan, Akshay  and
      Vajjala, Sowmya",
    editor = "Can, Burcu  and
      Mozes, Maximilian  and
      Cahyawijaya, Samuel  and
      Saphra, Naomi  and
      Kassner, Nora  and
      Ravfogel, Shauli  and
      Ravichander, Abhilasha  and
      Zhao, Chen  and
      Augenstein, Isabelle  and
      Rogers, Anna  and
      Cho, Kyunghyun  and
      Grefenstette, Edward  and
      Voita, Lena",
    booktitle = "Proceedings of the 8th Workshop on Representation Learning for NLP (RepL4NLP 2023)",
    month = jul,
    year = "2023",
    address = "Toronto, Canada",
    publisher = "Association for Computational Linguistics",
    url = "https://preview.aclanthology.org/ingest-emnlp/2023.repl4nlp-1.4/",
    doi = "10.18653/v1/2023.repl4nlp-1.4",
    pages = "40--53",
    abstract = "Adversarial evaluations of language models typically focus on English alone. In this paper, we performed a multilingual evaluation of Named Entity Recognition (NER) in terms of its robustness to small perturbations in the input. Our results showed the NER models we explored across three languages (English, German and Hindi) are not very robust to such changes, as indicated by the fluctuations in the overall F1 score as well as in a more fine-grained evaluation. With that knowledge, we further explored whether it is possible to improve the existing NER models using a part of the generated adversarial data sets as augmented training data to train a new NER model or as fine-tuning data to adapt an existing NER model. Our results showed that both these approaches improve performance on the original as well as adversarial test sets. While there is no significant difference between the two approaches for English, re-training is significantly better than fine-tuning for German and Hindi."
}