Michał Pietruszka


2022

Sparsifying Transformer Models with Trainable Representation Pooling
Michał Pietruszka | Łukasz Borchmann | Łukasz Garncarek
Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)

We propose a novel method to sparsify attention in the Transformer model by learning to select the most informative token representations during the training process, thus focusing on the task-specific parts of the input. The reduction from quadratic to sublinear time and memory complexity is achieved thanks to a robust trainable top-k operator. Our experiments on a challenging long document summarization task show that even our simple baseline performs comparably to the current SOTA, and with trainable pooling we can retain its top quality, while being 1.8× faster during training, 4.5× faster during inference, and up to 13× more computationally efficient in the decoder.
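To make the general idea concrete, the sketch below shows a simplified trainable top-k pooling layer in PyTorch. It is an illustrative assumption, not the paper's operator: a learned linear scorer ranks token representations, a hard top-k selects the k best, and multiplying the selected vectors by their (sigmoided) scores keeps the scorer trainable. The class name, hidden size, and k value are invented for the example.

```python
# Minimal sketch, assuming a learned scorer + hard top-k as a stand-in
# for the paper's robust trainable top-k operator (which is smoother).
import torch
import torch.nn as nn


class TopKPooling(nn.Module):
    def __init__(self, hidden_size: int, k: int):
        super().__init__()
        self.k = k
        self.scorer = nn.Linear(hidden_size, 1)  # learned relevance score per token

    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
        # hidden_states: (batch, seq_len, hidden_size)
        scores = self.scorer(hidden_states).squeeze(-1)        # (batch, seq_len)
        topk_scores, topk_idx = scores.topk(self.k, dim=-1)    # hard selection of k tokens
        idx = topk_idx.unsqueeze(-1).expand(-1, -1, hidden_states.size(-1))
        selected = hidden_states.gather(1, idx)                # (batch, k, hidden_size)
        # Scaling by the scores lets gradients reach the scorer,
        # since gather alone gives it no learning signal.
        return selected * torch.sigmoid(topk_scores).unsqueeze(-1)


# Usage: pool a 4096-token encoder output down to 512 representations
# before cross-attention in the decoder.
pooled = TopKPooling(hidden_size=768, k=512)(torch.randn(2, 4096, 768))
```

Pooling the encoder output to a fixed, small k is what removes the dependence of decoder cross-attention on the full input length.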

2020

From Dataset Recycling to Multi-Property Extraction and Beyond
Tomasz Dwojak | Michał Pietruszka | Łukasz Borchmann | Jakub Chłędowski | Filip Graliński
Proceedings of the 24th Conference on Computational Natural Language Learning

This paper investigates various Transformer architectures on the WikiReading Information Extraction and Machine Reading Comprehension dataset. The proposed dual-source model outperforms the current state-of-the-art by a large margin. Next, we introduce WikiReading Recycled, a newly developed public dataset, and the task of multiple-property extraction. It uses the same data as WikiReading but does not inherit its predecessor's identified disadvantages. In addition, we provide a human-annotated test set with diagnostic subsets for a detailed analysis of model performance.
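As a hedged illustration of the multiple-property extraction setting (the document snippet, property names, and dictionary layout below are invented for the example and are not the dataset's actual schema): a single document is paired with several requested properties, and the system must return a value, or a list of values, for each of them at once.

```python
# Hypothetical example of a multiple-property extraction instance.
example = {
    "document": "Kraków is a city in southern Poland, situated on the Vistula River ...",
    "properties": ["country", "located next to body of water"],
}

# Expected output: one or more extracted values per requested property.
expected = {
    "country": ["Poland"],
    "located next to body of water": ["Vistula River"],
}
```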