KoBEST: Korean Balanced Evaluation of Significant Tasks

Myeongjun Jang, Dohyung Kim, Deuk Sin Kwon, Eric Davis


Abstract
A well-formulated benchmark plays a critical role in spurring advances in the natural language processing (NLP) field, as it allows objective and precise evaluation of diverse models. As modern language models (LMs) have become more elaborate and sophisticated, more difficult benchmarks that require linguistic knowledge and reasoning have been proposed. However, most of these benchmarks support only English, and great effort is necessary to construct benchmarks for other low-resource languages. To this end, we propose a new benchmark named Korean Balanced Evaluation of Significant Tasks (KoBEST), which consists of five Korean-language downstream tasks. Professional Korean linguists designed tasks that require advanced Korean linguistic knowledge. Moreover, our data are annotated entirely by humans and thoroughly reviewed to guarantee high data quality. We also provide baseline models and human performance results. Our dataset is available on Hugging Face.
Anthology ID: 2022.coling-1.325
Volume: Proceedings of the 29th International Conference on Computational Linguistics
Month: October
Year: 2022
Address: Gyeongju, Republic of Korea
Venue: COLING
Publisher: International Committee on Computational Linguistics
Pages: 3697–3708
URL: https://aclanthology.org/2022.coling-1.325
Cite (ACL): Myeongjun Jang, Dohyung Kim, Deuk Sin Kwon, and Eric Davis. 2022. KoBEST: Korean Balanced Evaluation of Significant Tasks. In Proceedings of the 29th International Conference on Computational Linguistics, pages 3697–3708, Gyeongju, Republic of Korea. International Committee on Computational Linguistics.
Cite (Informal): KoBEST: Korean Balanced Evaluation of Significant Tasks (Jang et al., COLING 2022)
PDF: https://preview.aclanthology.org/auto-file-uploads/2022.coling-1.325.pdf
Data: Kobest, BoolQ, COPA, GLUE, HellaSwag, SuperGLUE, WiC
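Since the abstract notes that the dataset is released on Hugging Face, the following is a minimal sketch of how one KoBEST subset could be loaded with the `datasets` library. The dataset ID `skt/kobest_v1` and the configuration name `copa` are assumptions, not identifiers stated on this page.

```python
# Minimal sketch: loading one KoBEST task with the Hugging Face `datasets` library.
# The dataset ID "skt/kobest_v1" and the config name "copa" are assumptions not
# taken from this page; check the Hugging Face Hub for the exact identifiers.
from datasets import load_dataset

# Load the KoBEST COPA-style subset; returns a DatasetDict with its splits.
copa = load_dataset("skt/kobest_v1", "copa")

# Inspect one training example (premise, two alternatives, and a label).
print(copa["train"][0])
```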