Aleksandra Chrabrowa
2023
Going beyond research datasets: Novel intent discovery in the industry setting
Aleksandra Chrabrowa | Tsimur Hadeliya | Dariusz Kajtoch | Robert Mroczkowski | Piotr Rybak
Findings of the Association for Computational Linguistics: EACL 2023
Novel intent discovery automates the process of grouping similar messages (questions) to identify previously unknown intents. However, current research focuses on publicly available datasets which contain only the question field and differ significantly from real-life datasets. This paper proposes methods to improve the intent discovery pipeline deployed in a large e-commerce platform. We show the benefit of pre-training language models on in-domain data, both self-supervised and with weak supervision. We also devise the best method, which we call Conv, to utilize the conversational structure (i.e., question and answer) of real-life datasets during fine-tuning for clustering tasks. All our methods combined, fully utilizing real-life datasets, give up to a 33pp performance boost over the state-of-the-art Constrained Deep Adaptive Clustering (CDAC) model applied to questions only. By comparison, the CDAC model on question data alone gives only up to a 13pp boost over the naive baseline.
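For illustration, the sketch below shows a generic embedding-and-cluster baseline for intent discovery: question-answer pairs are encoded with an off-the-shelf sentence encoder and the embeddings are grouped with k-means. This is not the paper's CDAC or Conv method; the encoder checkpoint, the simple question-answer concatenation, and the cluster count are assumptions made for the example.

```python
# Illustrative only: a generic embedding-and-cluster baseline for intent
# discovery, NOT the paper's CDAC or Conv method. The encoder checkpoint,
# the question-answer concatenation, and the cluster count are assumptions.
from sentence_transformers import SentenceTransformer
from sklearn.cluster import KMeans

# Hypothetical data: (question, answer) pairs from a customer-support log.
dialogues = [
    ("Where is my parcel?", "You can track it in the Orders tab."),
    ("How do I return an item?", "Open the order and choose Return."),
    ("My package has not arrived yet.", "Please check the tracking number."),
    ("Can I send the product back?", "Yes, returns are free within 30 days."),
]

# Concatenating question and answer is one naive way to use the conversational
# structure; the paper compares more careful alternatives during fine-tuning.
texts = [f"{question} {answer}" for question, answer in dialogues]

encoder = SentenceTransformer("all-MiniLM-L6-v2")  # assumed off-the-shelf encoder
embeddings = encoder.encode(texts)

# Group the messages into candidate intents.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(embeddings)
for label, text in zip(labels, texts):
    print(label, text)
```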
2022
Evaluation of Transfer Learning for Polish with a Text-to-Text Model
Aleksandra Chrabrowa | Łukasz Dragan | Karol Grzegorczyk | Dariusz Kajtoch | Mikołaj Koszowski | Robert Mroczkowski | Piotr Rybak
Proceedings of the Thirteenth Language Resources and Evaluation Conference
We introduce a new benchmark for assessing the quality of text-to-text models for Polish. The benchmark consists of diverse tasks and datasets: the KLEJ benchmark adapted for text-to-text, en-pl translation, summarization, and question answering. In particular, since summarization and question answering lack benchmark datasets for the Polish language, we describe their construction in detail and make them publicly available. Additionally, we present plT5 - a general-purpose text-to-text model for Polish that can be fine-tuned on various Natural Language Processing (NLP) tasks with a single training objective. Unsupervised denoising pre-training is performed efficiently by initializing the model weights from the multilingual T5 (mT5) counterpart. We evaluate the performance of plT5, mT5, Polish BART (plBART), and Polish GPT-2 (papuGaPT2). plT5 scores top on all of these tasks except summarization, where plBART is best. In general (except for summarization), the larger the model, the better the results. The encoder-decoder architectures prove to be better than the decoder-only equivalent.
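As a rough usage sketch, a plT5-style checkpoint can be loaded through the Hugging Face transformers library and used in the text-to-text fashion described above. The checkpoint id "allegro/plt5-base" and the example input are assumptions; the paper's fine-tuning recipes for KLEJ, translation, summarization, and QA are not reproduced here.

```python
# A minimal sketch of loading a Polish text-to-text checkpoint with the
# Hugging Face transformers library. The checkpoint id "allegro/plt5-base"
# and the example input are assumptions, not details taken from the paper.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("allegro/plt5-base")
model = AutoModelForSeq2SeqLM.from_pretrained("allegro/plt5-base")

# In the text-to-text setting every downstream task is phrased as
# "input string in, output string out" and solved by generation.
batch = tokenizer("To jest przykładowe zdanie.", return_tensors="pt")
output_ids = model.generate(**batch, max_new_tokens=32)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```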