Interpretable Multi-dataset Evaluation for Named Entity Recognition

Jinlan Fu, Pengfei Liu, Graham Neubig


Abstract
With the proliferation of models for natural language processing tasks, it has become increasingly difficult to understand the differences between models and their relative merits. Simply comparing holistic metrics such as accuracy, BLEU, or F1 does not tell us why or how particular methods perform differently, or how different datasets influence model design choices. In this paper, we present a general methodology for interpretable evaluation of the named entity recognition (NER) task. The proposed evaluation method enables us to interpret the differences between models and datasets, as well as the interplay between them, identifying the strengths and weaknesses of current systems. By making our analysis tool available, we make it easy for future researchers to run similar analyses and drive progress in this area: https://github.com/neulab/InterpretEval
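
To give a concrete sense of the idea, the sketch below illustrates attribute-based bucketed evaluation in the spirit of the paper: entity spans are grouped into buckets by a single attribute (here, entity length, chosen only for illustration) and span-level F1 is computed per bucket instead of as one holistic score. This is a minimal illustrative sketch, not the authors' InterpretEval implementation; the function and variable names are invented for this example.

from collections import defaultdict

def bucket_key(entity):
    # Assumed attribute: number of tokens in the entity span.
    start, end, _label = entity
    length = end - start + 1
    return str(length) if length <= 3 else "4+"

def bucketed_f1(gold_entities, pred_entities):
    # gold_entities / pred_entities: sets of (start, end, label) spans.
    tp, fp, fn = defaultdict(int), defaultdict(int), defaultdict(int)
    for ent in pred_entities:
        (tp if ent in gold_entities else fp)[bucket_key(ent)] += 1
    for ent in gold_entities - pred_entities:
        fn[bucket_key(ent)] += 1
    scores = {}
    for b in set(tp) | set(fp) | set(fn):
        p = tp[b] / (tp[b] + fp[b]) if tp[b] + fp[b] else 0.0
        r = tp[b] / (tp[b] + fn[b]) if tp[b] + fn[b] else 0.0
        scores[b] = 2 * p * r / (p + r) if p + r else 0.0
    return scores

# Toy example: gold vs. predicted spans for one sentence.
gold = {(0, 1, "PER"), (4, 4, "LOC")}
pred = {(0, 1, "PER"), (4, 5, "LOC")}
print(bucketed_f1(gold, pred))  # per-bucket F1, e.g. {'1': 0.0, '2': 0.67}
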
Anthology ID:
2020.emnlp-main.489
Volume:
Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)
Month:
November
Year:
2020
Address:
Online
Editors:
Bonnie Webber, Trevor Cohn, Yulan He, Yang Liu
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
6058–6069
URL:
https://aclanthology.org/2020.emnlp-main.489
DOI:
10.18653/v1/2020.emnlp-main.489
Cite (ACL):
Jinlan Fu, Pengfei Liu, and Graham Neubig. 2020. Interpretable Multi-dataset Evaluation for Named Entity Recognition. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 6058–6069, Online. Association for Computational Linguistics.
Cite (Informal):
Interpretable Multi-dataset Evaluation for Named Entity Recognition (Fu et al., EMNLP 2020)
PDF:
https://aclanthology.org/2020.emnlp-main.489.pdf
Optional supplementary material:
2020.emnlp-main.489.OptionalSupplementaryMaterial.zip
Video:
https://slideslive.com/38939382
Code:
neulab/InterpretEval (plus additional community code)
Data:
WNUT 2016 NER