Łukasz Garncarek


2025

Arctic-TILT. Business Document Understanding at Sub-Billion Scale
Łukasz Borchmann | Michał Pietruszka | Wojciech Jaśkowski | Dawid Jurkiewicz | Piotr Halama | Paweł Józiak | Łukasz Garncarek | Paweł Liskowski | Karolina Szyndler | Andrzej Gretkowski | Julita Ołtusek | Gabriela Nowakowska | Artur Zawłocki | Łukasz Duhr | Paweł Dyda | Michał Turski
Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 6: Industry Track)

A vast portion of workloads employing LLMs involves answering questions grounded in PDF or scanned content. We introduce Arctic-TILT, which achieves accuracy on par with models 1000× its size on these use cases. It can be fine-tuned and deployed on a single 24 GB GPU, lowering operational costs while processing rich documents of up to 400k tokens. The model establishes state-of-the-art results on seven diverse Document Understanding benchmarks and provides reliable confidence scores and fast inference, essential for processing files in large-scale or time-sensitive enterprise environments. We release the Arctic-TILT weights and an efficient vLLM-based implementation under a permissive license.

2024

STable: Table Generation Framework for Encoder-Decoder Models
Michał Pietruszka | Michał Turski | Łukasz Borchmann | Tomasz Dwojak | Gabriela Nowakowska | Karolina Szyndler | Dawid Jurkiewicz | Łukasz Garncarek
Proceedings of the 18th Conference of the European Chapter of the Association for Computational Linguistics (Volume 1: Long Papers)

The output structure of database-like tables, consisting of values arranged in horizontal rows and vertical columns identifiable by name, can cover a wide range of NLP tasks. Following this observation, we propose a framework for text-to-table neural models applicable to problems such as extraction of line items, joint entity and relation extraction, or knowledge base population. The permutation-based decoder of our proposal is a generalized sequential method that comprehends information from all cells in the table. Training maximizes the expected log-likelihood of a table's content across all random permutations of the factorization order. During content inference, we exploit the model's ability to generate cells in any order by searching over possible orderings to maximize the model's confidence and avoid the substantial error accumulation that other sequential models are prone to. Experiments demonstrate the high practical value of the framework, which establishes state-of-the-art results on several challenging datasets, outperforming previous solutions by up to 15%.
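
As a sketch (notation ours, not the paper's), the training objective described above can be written as follows, where c_1, …, c_N are the table's cell values, x is the input text, and σ is drawn uniformly from the set S_N of permutations:

```latex
\mathcal{L}(\theta)
  = \mathbb{E}_{\sigma \sim \mathrm{Unif}(S_N)}
    \left[ \sum_{i=1}^{N}
      \log p_\theta\!\left( c_{\sigma(i)} \mid c_{\sigma(1)}, \ldots, c_{\sigma(i-1)},\, x \right)
    \right]
```

At inference time, the same property is used in reverse: the decoder searches over cell orderings and follows the one it is most confident about, rather than a fixed left-to-right order.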

2022

Sparsifying Transformer Models with Trainable Representation Pooling
Michał Pietruszka | Łukasz Borchmann | Łukasz Garncarek
Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)

We propose a novel method to sparsify attention in the Transformer model by learning to select the most informative token representations during training, thus focusing on the task-specific parts of the input. Quadratic time and memory complexity is reduced to sublinear thanks to a robust trainable top-k operator. Our experiments on a challenging long-document summarization task show that even our simple baseline performs comparably to the current SOTA, and that with trainable pooling we retain its top quality while being 1.8× faster during training, 4.5× faster during inference, and up to 13× more computationally efficient in the decoder.
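
A minimal PyTorch sketch of the general idea behind trainable top-k representation pooling (class and parameter names are ours, and the paper's actual operator is more elaborate than this): a learned scorer ranks token representations, only the k highest-scoring ones are passed to subsequent layers, and the kept representations are rescaled by their scores so the scorer still receives gradients despite the hard selection.

```python
import torch
import torch.nn as nn


class TrainableTopKPooling(nn.Module):
    """Keep only the k highest-scoring token representations,
    shortening the sequence seen by later Transformer layers."""

    def __init__(self, hidden_size: int, k: int):
        super().__init__()
        self.k = k
        self.scorer = nn.Linear(hidden_size, 1)  # learned per-token relevance score

    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
        # hidden_states: (batch, seq_len, hidden_size)
        scores = self.scorer(hidden_states).squeeze(-1)        # (batch, seq_len)
        topk_scores, topk_idx = scores.topk(self.k, dim=-1)    # hard top-k selection
        gather_idx = topk_idx.unsqueeze(-1).expand(-1, -1, hidden_states.size(-1))
        selected = hidden_states.gather(1, gather_idx)         # (batch, k, hidden_size)
        # Rescale by the (squashed) scores so gradients reach the scorer,
        # even though the index selection itself is non-differentiable.
        return selected * torch.sigmoid(topk_scores).unsqueeze(-1)


# Example: pool a 4096-token sequence down to 256 representations.
pool = TrainableTopKPooling(hidden_size=768, k=256)
x = torch.randn(2, 4096, 768)
print(pool(x).shape)  # torch.Size([2, 256, 768])
```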