Karol Grzegorczyk
2022
Evaluation of Transfer Learning for Polish with a Text-to-Text Model
Aleksandra Chrabrowa | Łukasz Dragan | Karol Grzegorczyk | Dariusz Kajtoch | Mikołaj Koszowski | Robert Mroczkowski | Piotr Rybak
Proceedings of the Thirteenth Language Resources and Evaluation Conference
We introduce a new benchmark for assessing the quality of text-to-text models for Polish. The benchmark consists of diverse tasks and datasets: the KLEJ benchmark adapted for text-to-text, en-pl translation, summarization, and question answering. In particular, since summarization and question answering lack benchmark datasets for the Polish language, we describe their construction in detail and make them publicly available. Additionally, we present plT5 - a general-purpose text-to-text model for Polish that can be fine-tuned on various Natural Language Processing (NLP) tasks with a single training objective. Unsupervised denoising pre-training is performed efficiently by initializing the model weights with those of its multi-lingual T5 (mT5) counterpart. We evaluate the performance of plT5, mT5, Polish BART (plBART), and Polish GPT-2 (papuGaPT2). plT5 achieves the best scores on all of these tasks except summarization, where plBART is best. In general (except for summarization), the larger the model, the better the results. The encoder-decoder architectures prove to be better than the decoder-only equivalent.
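To show how a single text-to-text objective covers downstream tasks, below is a minimal fine-tuning sketch using the Hugging Face transformers library. The checkpoint id `allegro/plt5-base`, the `klej sentiment:` task prefix, and the example input/target pair are assumptions made for illustration, not the paper's exact setup.

```python
# Minimal text-to-text fine-tuning sketch. The checkpoint id
# "allegro/plt5-base" and the task prefix below are assumptions
# made for illustration; substitute the actual published names.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("allegro/plt5-base")
model = AutoModelForSeq2SeqLM.from_pretrained("allegro/plt5-base")

# Every task is cast as text in, text out: here a hypothetical
# sentiment task ("Great product, I recommend it!" -> "positive").
inputs = tokenizer("klej sentiment: Świetny produkt, polecam!",
                   return_tensors="pt")
labels = tokenizer("pozytywny", return_tensors="pt").input_ids

# One training objective for all tasks: cross-entropy on target tokens.
loss = model(**inputs, labels=labels).loss
loss.backward()  # plug into any optimizer loop to fine-tune
```

The same pattern, with a different prefix and target string, would cover translation, summarization, and question answering.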
2021
Allegro.eu Submission to WMT21 News Translation Task
Mikołaj Koszowski | Karol Grzegorczyk | Tsimur Hadeliya
Proceedings of the Sixth Conference on Machine Translation
We submitted two uni-directional models, one for the English→Icelandic direction and the other for the Icelandic→English direction. Our news translation system is based on the transformer-big architecture; it makes use of corpora filtering, back-translation, and forward translation applied to parallel and monolingual data alike.
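As a rough illustration of the back-translation step mentioned above, the sketch below uses a publicly available reverse-direction model to manufacture synthetic source sentences; the `Helsinki-NLP/opus-mt-is-en` checkpoint is an assumption standing in for the authors' own reverse system.

```python
# Hedged sketch of back-translation: a reverse-direction (is->en) model
# turns monolingual Icelandic text into synthetic English sources, yielding
# extra (synthetic English, real Icelandic) pairs for an en->is system.
# The checkpoint name is an assumption, not the authors' actual model.
from transformers import pipeline

reverse = pipeline("translation", model="Helsinki-NLP/opus-mt-is-en")

monolingual_is = ["Veðrið er gott í dag.", "Ég er að læra íslensku."]
synthetic_en = [out["translation_text"] for out in reverse(monolingual_is)]

# The forward en->is model would then be trained on these synthetic pairs
# mixed with the filtered genuine parallel data.
training_pairs = list(zip(synthetic_en, monolingual_is))
```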
2018
Disambiguated skip-gram model
Karol Grzegorczyk | Marcin Kurdziel
Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing
We present disambiguated skip-gram: a neural-probabilistic model for learning multi-sense distributed representations of words. Disambiguated skip-gram jointly estimates a skip-gram-like context word prediction model and a word sense disambiguation model. Unlike previous probabilistic models for learning multi-sense word embeddings, disambiguated skip-gram is end-to-end differentiable and can be interpreted as a simple feed-forward neural network. We also introduce an effective pruning strategy for the embeddings learned by disambiguated skip-gram. This allows us to control the granularity of the representations learned by our model. In experimental evaluation, disambiguated skip-gram improves state-of-the-art results on several word sense induction benchmarks.
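To make the joint prediction/disambiguation idea concrete, here is a toy PyTorch sketch of a multi-sense skip-gram with a differentiable sense-assignment head. It illustrates the general mechanism only; the paper's exact parameterization and its pruning strategy are not reproduced here.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiSenseSkipGram(nn.Module):
    """Toy multi-sense skip-gram with a differentiable sense head.
    A sketch of the general idea, not the paper's exact model."""

    def __init__(self, vocab_size, dim, num_senses=3):
        super().__init__()
        self.num_senses = num_senses
        self.sense = nn.Embedding(vocab_size * num_senses, dim)  # sense vectors
        self.context = nn.Embedding(vocab_size, dim)             # output vectors

    def forward(self, center, context_words):
        # center: (B,) word ids; context_words: (B, C) surrounding word ids.
        offsets = torch.arange(self.num_senses, device=center.device)
        s = self.sense(center.unsqueeze(1) * self.num_senses + offsets)  # (B, K, d)
        ctx = self.context(context_words).mean(dim=1, keepdim=True)      # (B, 1, d)
        # Soft, differentiable sense assignment from agreement with context.
        p_sense = F.softmax((s * ctx).sum(-1), dim=-1)                   # (B, K)
        # Expected center vector under the sense distribution.
        center_vec = (p_sense.unsqueeze(-1) * s).sum(dim=1)              # (B, d)
        # Skip-gram-like scores over the whole vocabulary.
        return center_vec @ self.context.weight.T                        # (B, V)

# Training mirrors ordinary skip-gram: cross-entropy between the returned
# scores and each context word id (negative sampling would be used at scale).
```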
2017
Binary Paragraph Vectors
Karol Grzegorczyk | Marcin Kurdziel
Proceedings of the 2nd Workshop on Representation Learning for NLP
Recently, Le & Mikolov described two log-linear models, called Paragraph Vector, that can be used to learn state-of-the-art distributed representations of documents. Inspired by this work, we present Binary Paragraph Vector models: simple neural networks that learn short binary codes for fast information retrieval. We show that binary paragraph vectors outperform autoencoder-based binary codes, despite using fewer bits. We also evaluate their precision in transfer learning settings, where binary codes are inferred for documents unrelated to the training corpus. Results from these experiments indicate that binary paragraph vectors can capture semantics relevant for various domain-specific documents. Finally, we present a model that simultaneously learns short binary codes and longer, real-valued representations. This model can be used to rapidly retrieve a short list of highly relevant documents from a large document collection.
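A toy sketch of the binary-code idea follows, assuming a PV-DBOW-style objective with a sigmoid bottleneck and simple thresholding at inference; the paper's actual training and binarization details may differ.

```python
import torch
import torch.nn as nn

class BinaryPVDBOW(nn.Module):
    """Toy PV-DBOW-style model with a sigmoid bottleneck. Codes are
    obtained by thresholding at inference; a sketch of the idea, not
    the paper's exact procedure."""

    def __init__(self, num_docs, vocab_size, code_bits=128):
        super().__init__()
        self.doc = nn.Embedding(num_docs, code_bits)
        self.out = nn.Linear(code_bits, vocab_size)

    def forward(self, doc_ids):
        # Squash document vectors into (0, 1) so they approximate bits,
        # then predict words sampled from the document (PV-DBOW objective).
        return self.out(torch.sigmoid(self.doc(doc_ids)))

    @torch.no_grad()
    def code(self, doc_ids):
        return torch.sigmoid(self.doc(doc_ids)) > 0.5  # boolean codes

def hamming(a, b):
    # Retrieval ranks documents by Hamming distance between binary codes.
    return (a ^ b).sum(-1)
```

Short boolean codes make the distance computation a cheap XOR-and-popcount, which is what enables fast retrieval over large collections.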