@inproceedings{ivanova-etal-2022-comparing,
    title = "Comparing Annotated Datasets for Named Entity Recognition in {E}nglish Literature",
    author = "Ivanova, Rositsa V.  and
      Kirrane, Sabrina  and
      van Erp, Marieke",
    editor = "Calzolari, Nicoletta  and
      B{\'e}chet, Fr{\'e}d{\'e}ric  and
      Blache, Philippe  and
      Choukri, Khalid  and
      Cieri, Christopher  and
      Declerck, Thierry  and
      Goggi, Sara  and
      Isahara, Hitoshi  and
      Maegaard, Bente  and
      Mariani, Joseph  and
      Mazo, H{\'e}l{\`e}ne  and
      Odijk, Jan  and
      Piperidis, Stelios",
    booktitle = "Proceedings of the Thirteenth Language Resources and Evaluation Conference",
    month = jun,
    year = "2022",
    address = "Marseille, France",
    publisher = "European Language Resources Association",
    url = "https://preview.aclanthology.org/ingest-emnlp/2022.lrec-1.404/",
    pages = "3788--3797",
    abstract = "The growing interest in named entity recognition (NER) in various domains has led to the creation of different benchmark datasets, often with slightly different annotation guidelines. To better understand the different NER benchmark datasets for the domain of English literature and their impact on the evaluation of NER tools, we analyse two existing annotated datasets and create two additional gold standard datasets. Following on from this, we evaluate the performance of two NER tools, one domain-specific and one general-purpose NER tool, using the four gold standards, and analyse the sources for the differences in the measured performance. Our results show that the performance of the two tools varies significantly depending on the gold standard used for the individual evaluations."
}