Dawid Wisniewski
2020
RecipeNLG: A Cooking Recipes Dataset for Semi-Structured Text Generation
Michał Bień | Michał Gilski | Martyna Maciejewska | Wojciech Taisner | Dawid Wisniewski | Agnieszka Lawrynowicz
Proceedings of the 13th International Conference on Natural Language Generation
Semi-structured text generation is a non-trivial problem. Although recent years have brought many improvements in natural language generation, thanks to the development of neural models trained on large-scale datasets, these approaches still struggle to produce structured, context- and commonsense-aware texts. Moreover, it is not clear how to evaluate the quality of generated texts. To address these problems, we introduce RecipeNLG – a novel dataset of cooking recipes. We discuss the data collection process and the relation between semi-structured texts and cooking recipes. We use the dataset to approach the problem of generating recipes. Finally, we make use of multiple metrics to evaluate the generated recipes.
Contract Discovery: Dataset and a Few-Shot Semantic Retrieval Challenge with Competitive Baselines
Łukasz Borchmann | Dawid Wisniewski | Andrzej Gretkowski | Izabela Kosmala | Dawid Jurkiewicz | Łukasz Szałkiewicz | Gabriela Pałka | Karol Kaczmarek | Agnieszka Kaliska | Filip Graliński
Findings of the Association for Computational Linguistics: EMNLP 2020
We propose a new shared task of semantic retrieval from legal texts: so-called contract discovery, in which legal clauses are extracted from documents given a few examples of similar clauses from other legal acts. The task differs substantially from conventional NLI and shared tasks on legal information extraction (e.g., one has to identify a text span instead of a single document, page, or paragraph). The specification of the proposed task is followed by an evaluation of multiple solutions within a unified framework proposed for this branch of methods. It is shown that state-of-the-art pretrained encoders fail to provide satisfactory results on the proposed task. In contrast, Language Model-based solutions perform better, especially when unsupervised fine-tuning is applied. Besides the ablation studies, we address how detection accuracy for relevant text fragments depends on the number of examples available. In addition to the dataset and reference results, LMs specialized in the legal domain are made publicly available.