Efficient Benchmarking of NLP APIs using Multi-armed Bandits

Gholamreza Haffari, Tuan Dung Tran, Mark Carman


Abstract
Comparing NLP systems to select the best one for a task of interest, such as named entity recognition, is critical for practitioners and researchers. A rigorous approach involves setting up a hypothesis testing scenario using the performance of the systems on query documents. However, the hypothesis testing approach often requires sending a large number of document queries to the systems, which can be costly. In this paper, we present an effective alternative based on the multi-armed bandit (MAB). We propose a hierarchical generative model to represent the uncertainty in the performance measures of the competing systems, which is then used by Thompson Sampling to solve the resulting MAB problem. Experimental results on both synthetic and real data show that our approach requires significantly fewer queries than the standard benchmarking technique to identify the best system according to F-measure.
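The abstract's core idea, using Thompson Sampling with a posterior over each system's performance to decide which API to query next, can be sketched roughly as follows. This is a minimal illustration under simplifying assumptions, not the paper's method: the three simulated APIs, their `TRUE_RATES`, the `query_system` helper, the independent per-system Dirichlet posterior over (TP, FP, FN) proportions, and the query budget of 500 are all hypothetical stand-ins for the paper's hierarchical generative model and real NER services.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical simulator of three NLP APIs: querying system k with one document
# yields its true-positive / false-positive / false-negative counts over the
# gold entity mentions in that document. The rates below are made-up numbers;
# in practice the counts would come from sending real documents to the services.
TRUE_RATES = [
    (0.60, 0.25, 0.15),
    (0.65, 0.20, 0.15),
    (0.62, 0.18, 0.20),
]
K = len(TRUE_RATES)

def query_system(k, n_mentions=20):
    """Simulate one document query to system k; return (tp, fp, fn) counts."""
    return rng.multinomial(n_mentions, TRUE_RATES[k])

def f_measure(tp, fp, fn):
    """F1 computed from counts (or proportions, which gives the same value)."""
    denom = 2 * tp + fp + fn
    return 2 * tp / denom if denom > 0 else 0.0

# Simplified posterior: an independent Dirichlet over each system's
# (TP, FP, FN) proportions, a stand-in for the paper's hierarchical model.
alpha = np.ones((K, 3))
n_queries_sent = np.zeros(K, dtype=int)

for _ in range(500):
    # Thompson Sampling step: sample plausible proportions for every system,
    # convert them to F-measures, and query the apparently best system.
    sampled_f = [f_measure(*rng.dirichlet(alpha[k])) for k in range(K)]
    k = int(np.argmax(sampled_f))
    tp, fp, fn = query_system(k)
    alpha[k] += (tp, fp, fn)       # conjugate posterior update
    n_queries_sent[k] += 1

posterior_mean_f = [f_measure(*(alpha[k] / alpha[k].sum())) for k in range(K)]
print("Estimated best system:", int(np.argmax(posterior_mean_f)))
print("Queries sent per system:", n_queries_sent.tolist())
```

Because F1 = 2TP / (2TP + FP + FN) depends only on the proportions of TP, FP and FN, sampling proportions from the Dirichlet posterior and plugging them into the F-measure is a simple way to propagate count uncertainty into F-measure uncertainty, which is what lets the bandit concentrate its queries on the most promising systems.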
Anthology ID:
E17-1039
Volume:
Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 1, Long Papers
Month:
April
Year:
2017
Address:
Valencia, Spain
Venue:
EACL
Publisher:
Association for Computational Linguistics
Pages:
408–416
URL:
https://aclanthology.org/E17-1039
Cite (ACL):
Gholamreza Haffari, Tuan Dung Tran, and Mark Carman. 2017. Efficient Benchmarking of NLP APIs using Multi-armed Bandits. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 1, Long Papers, pages 408–416, Valencia, Spain. Association for Computational Linguistics.
Cite (Informal):
Efficient Benchmarking of NLP APIs using Multi-armed Bandits (Haffari et al., EACL 2017)
PDF:
https://preview.aclanthology.org/update-css-js/E17-1039.pdf