2022
Evaluation of Transfer Learning for Polish with a Text-to-Text Model
Aleksandra Chrabrowa | Łukasz Dragan | Karol Grzegorczyk | Dariusz Kajtoch | Mikołaj Koszowski | Robert Mroczkowski | Piotr Rybak
Proceedings of the Thirteenth Language Resources and Evaluation Conference
We introduce a new benchmark for assessing the quality of text-to-text models for Polish. The benchmark consists of diverse tasks and datasets: the KLEJ benchmark adapted to the text-to-text format, en-pl machine translation, summarization, and question answering. Since summarization and question answering lack benchmark datasets for the Polish language, we describe their construction in detail and make them publicly available. Additionally, we present plT5, a general-purpose text-to-text model for Polish that can be fine-tuned on various Natural Language Processing (NLP) tasks with a single training objective. Unsupervised denoising pre-training is performed efficiently by initializing the model weights with those of its multilingual T5 (mT5) counterpart. We evaluate the performance of plT5, mT5, Polish BART (plBART), and Polish GPT-2 (papuGaPT2). plT5 scores best on all of these tasks except summarization, where plBART performs best. In general, except for summarization, the larger the model, the better the results, and the encoder-decoder architectures prove better than the decoder-only equivalent.
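To make the text-to-text setup concrete, below is a minimal sketch of loading plT5 with the Hugging Face Transformers library and running inference on one of the benchmark tasks. The checkpoint identifier allegro/plt5-base, the "summarize:" task prefix, and the example sentence are illustrative assumptions, not details fixed by the paper.

```python
# Minimal sketch: inference with a Polish text-to-text model via Hugging Face
# Transformers. The Hub identifier "allegro/plt5-base" and the task prefix
# are assumptions for illustration only.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

MODEL_NAME = "allegro/plt5-base"  # assumed Hub identifier for plT5

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSeq2SeqLM.from_pretrained(MODEL_NAME)

# In the text-to-text framing, every task (classification, translation,
# summarization, QA) is cast as: input string -> output string, so a single
# fine-tuning objective covers all of them.
text = (
    "summarize: Wczoraj w Warszawie odbyła się konferencja poświęcona "
    "przetwarzaniu języka naturalnego."
)
inputs = tokenizer(text, return_tensors="pt")

# Generate the output string; for a classification task this would be the
# label verbalized as text, for summarization the summary itself.
output_ids = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```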