Zhiyi Ma
2021
Dynabench: Rethinking Benchmarking in NLP
Douwe Kiela | Max Bartolo | Yixin Nie | Divyansh Kaushik | Atticus Geiger | Zhengxuan Wu | Bertie Vidgen | Grusha Prasad | Amanpreet Singh | Pratik Ringshia | Zhiyi Ma | Tristan Thrush | Sebastian Riedel | Zeerak Waseem | Pontus Stenetorp | Robin Jia | Mohit Bansal | Christopher Potts | Adina Williams
Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
We introduce Dynabench, an open-source platform for dynamic dataset creation and model benchmarking. Dynabench runs in a web browser and supports human-and-model-in-the-loop dataset creation: annotators seek to create examples that a target model will misclassify, but that another person will not. In this paper, we argue that Dynabench addresses a critical need in our community: contemporary models quickly achieve outstanding performance on benchmark tasks but nonetheless fail on simple challenge examples and falter in real-world scenarios. With Dynabench, dataset creation, model development, and model assessment can directly inform each other, leading to more robust and informative benchmarks. We report on four initial NLP tasks, illustrating these concepts and highlighting the promise of the platform, and address potential objections to dynamic benchmarking as a new standard for the field.
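The abstract describes the collection protocol only at a high level. Below is a minimal sketch of a single human-and-model-in-the-loop collection round under the assumptions stated in the abstract: an example is kept only if the target model misclassifies it while a second person confirms the intended label. All object names and methods (write_example, predict, label) are hypothetical placeholders for illustration, not the Dynabench API.

```python
# Hypothetical sketch of one round of dynamic, model-fooling data collection.
# None of these interfaces are taken from the Dynabench platform itself.

def collect_adversarial_example(target_model, annotator, validator, context):
    """Return a verified model-fooling example, or None if this round fails."""
    # The annotator writes an example intended to fool the target model.
    example, intended_label = annotator.write_example(context)

    # If the model predicts the intended label, the example is not model-fooling.
    if target_model.predict(example) == intended_label:
        return None

    # A second person verifies the label; disagreement discards the example.
    if validator.label(example) != intended_label:
        return None

    return example, intended_label
```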
1996
Extracting Topics from Texts Based on Situations
Zhiyi Ma | Xuegong Zhan | Tianshun Yao
Proceedings of the 11th Pacific Asia Conference on Language, Information and Computation