Abstract
We introduce the Dutch Model Benchmark: DUMB. The benchmark includes a diverse set of datasets for low-, medium- and high-resource tasks. The total set of nine tasks includes four tasks that were previously not available in Dutch. Instead of relying on a mean score across tasks, we propose Relative Error Reduction (RER), which compares the DUMB performance of language models to a strong baseline that can be referred to in the future, even when assessing different sets of language models. Through a comparison of 14 pre-trained language models (mono- and multi-lingual, of varying sizes), we assess the internal consistency of the benchmark tasks, as well as the factors that likely enable high performance. Our results indicate that current Dutch monolingual models under-perform and suggest training larger Dutch models with other architectures and pre-training objectives. At present, the highest performance is achieved by DeBERTaV3 (large), XLM-R (large) and mDeBERTaV3 (base). In addition to highlighting best strategies for training larger Dutch models, DUMB will foster further research on Dutch. A public leaderboard is available at https://dumbench.nl.
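The abstract does not spell out how RER is computed; the sketch below is one plausible reading, assuming the per-task value is the reduction in error (1 − score) relative to the baseline's error, averaged over tasks. Function and variable names are illustrative, not taken from the paper.

```python
# Hedged sketch of Relative Error Reduction (RER), assuming the per-task
# definition RER = (baseline_error - model_error) / baseline_error with
# error = 1 - score; see the paper for the exact formulation.

def relative_error_reduction(model_scores, baseline_scores):
    """Average RER of a model over tasks, relative to a fixed strong baseline.

    Both arguments map task names to scores in [0, 1] (e.g. accuracy or F1).
    Positive RER means the model reduces the baseline's error; negative RER
    means it makes more errors than the baseline.
    """
    rers = []
    for task, baseline_score in baseline_scores.items():
        baseline_error = 1.0 - baseline_score
        model_error = 1.0 - model_scores[task]
        rers.append((baseline_error - model_error) / baseline_error)
    return sum(rers) / len(rers)

# Example with made-up scores: halving the baseline error on one task and
# matching the baseline on another yields an average RER of 0.25.
baseline = {"ner": 0.90, "sentiment": 0.80}
model = {"ner": 0.95, "sentiment": 0.80}
print(relative_error_reduction(model, baseline))  # 0.25
```

Because the baseline is fixed, scores computed this way remain comparable when new models are added to the leaderboard later, which is the motivation given in the abstract for preferring RER over a plain mean across tasks.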
- Anthology ID: 2023.emnlp-main.447
- Volume: Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing
- Month: December
- Year: 2023
- Address: Singapore
- Editors: Houda Bouamor, Juan Pino, Kalika Bali
- Venue: EMNLP
- Publisher: Association for Computational Linguistics
- Pages: 7221–7241
- URL: https://aclanthology.org/2023.emnlp-main.447
- DOI: 10.18653/v1/2023.emnlp-main.447
- Cite (ACL): Wietse de Vries, Martijn Wieling, and Malvina Nissim. 2023. DUMB: A Benchmark for Smart Evaluation of Dutch Models. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, pages 7221–7241, Singapore. Association for Computational Linguistics.
- Cite (Informal): DUMB: A Benchmark for Smart Evaluation of Dutch Models (de Vries et al., EMNLP 2023)
- PDF: https://aclanthology.org/2023.emnlp-main.447.pdf