Matthias Feurer
2026
promptolution: A Unified, Modular Framework for Prompt Optimization
Tom Zehle | Timo Heiß | Moritz Schlager | Matthias Aßenmacher | Matthias Feurer
Proceedings of the 19th Conference of the European Chapter of the Association for Computational Linguistics (Volume 3: System Demonstrations)
Prompt optimization has become crucial for enhancing the performance of large language models (LLMs) across a broad range of tasks. Although many research papers demonstrate its effectiveness, practical adoption is hindered because existing implementations are often tied to unmaintained, isolated research codebases or require invasive integration into application frameworks. To address this, we introduce promptolution, a unified, modular open-source framework that provides all components required for prompt optimization within a single extensible system for both practitioners and researchers. It integrates multiple contemporary discrete prompt optimizers, supports systematic and reproducible benchmarking, and returns framework-agnostic prompt strings, enabling seamless integration into existing LLM pipelines while remaining agnostic to the underlying model implementation.
2025
In-Context Learning of Soft Nearest Neighbor Classifiers for Intelligible Tabular Machine Learning
Mykhailo Koshil | Matthias Feurer | Katharina Eggensperger
Proceedings of the 4th Table Representation Learning Workshop
With in-context learning foundation models like TabPFN excelling on small supervised tabular learning tasks, it has been argued that “boosted trees are not the best default choice when working with data in tables”. However, such foundation models are inherently black-box models that do not provide interpretable predictions. We introduce a novel learning task that trains ICL models to act as a nearest neighbor algorithm, enabling intelligible inference without an empirical decrease in performance.