Leonid Karlinsky


2024

Self-Specialization: Uncovering Latent Expertise within Large Language Models
Junmo Kang | Hongyin Luo | Yada Zhu | Jacob Hansen | James Glass | David Cox | Alan Ritter | Rogerio Feris | Leonid Karlinsky
Findings of the Association for Computational Linguistics: ACL 2024

Recent works have demonstrated the effectiveness of self-alignment, in which a large language model is aligned to follow general instructions using instructional data generated from the model itself, starting from a handful of human-written seeds. Instead of general alignment, in this work we focus on self-alignment for expert domain specialization (e.g., biomedicine, finance). As a preliminary, we quantitatively show the marginal effect that generic instruction-following training has on downstream expert domains’ performance. To remedy this, we propose self-specialization - allowing for effective model specialization while achieving cross-task generalization by leveraging only a few labeled seeds. Self-specialization offers a data- and parameter-efficient way of “carving out” an expert model from a generalist pre-trained LLM. Exploring a variety of popular open large models as a base for specialization, our experimental results in both the biomedical and financial domains show that our self-specialized models outperform their base models by a large margin, and even outperform larger models that are generally instruction-tuned or that have been adapted to the target domain by other means.
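
Read as a recipe, the abstract describes a pipeline of seed instructions, self-generated in-domain data, and parameter-efficient tuning. The following is a minimal sketch of that loop, assuming a Hugging Face causal LM and LoRA adapters via the peft library; the model identifier, prompt template, and seed examples are illustrative placeholders, not the authors' actual setup.

```python
# Minimal sketch of a self-specialization loop (illustrative, not the authors' code).
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base_id = "huggyllama/llama-7b"  # placeholder generalist base model
tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id)

# 1) Start from a handful of human-written, domain-specific seed instructions.
seed_tasks = [
    {"instruction": "List the adverse events reported for drug X in the passage.",
     "response": "..."},
    # ... a few dozen seeds at most
]

def generate_synthetic_example(seed):
    """Prompt the base model itself to produce a new in-domain instruction/response pair."""
    prompt = (
        "You are a biomedical expert. Given the example below, write a new "
        "instruction and answer in the same domain.\n\n"
        f"Instruction: {seed['instruction']}\nResponse: {seed['response']}\n\n"
        "New instruction:"
    )
    inputs = tokenizer(prompt, return_tensors="pt")
    out = model.generate(**inputs, max_new_tokens=256, do_sample=True, top_p=0.9)
    return tokenizer.decode(out[0], skip_special_tokens=True)

# 2) Self-generate a specialization corpus from the seeds.
synthetic_data = [generate_synthetic_example(s) for s in seed_tasks]

# 3) Parameter-efficient fine-tuning: only small LoRA adapters are trained,
#    "carving out" the expert from the frozen generalist backbone.
lora = LoraConfig(r=16, lora_alpha=32, target_modules=["q_proj", "v_proj"])
specialist = get_peft_model(model, lora)
# `specialist` is then trained on `synthetic_data` with a standard causal-LM loss.
```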

2023

FlowchartQA: The First Large-Scale Benchmark for Reasoning over Flowcharts
Simon Tannert | Marcelo G. Feighelstein | Jasmina Bogojeska | Joseph Shtok | Assaf Arbelle | Peter W. J. Staar | Anika Schumann | Jonas Kuhn | Leonid Karlinsky
Proceedings of the 1st Workshop on Linguistic Insights from and for Multimodal Language Processing

Incorporating Structured Representations into Pretrained Vision & Language Models Using Scene Graphs
Roei Herzig | Alon Mendelson | Leonid Karlinsky | Assaf Arbelle | Rogerio Feris | Trevor Darrell | Amir Globerson
Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing

Vision and language models (VLMs) have demonstrated remarkable zero-shot (ZS) performance in a variety of tasks. However, recent works have shown that even the best VLMs struggle to capture aspects of compositional scene understanding, such as object attributes, relations, and action states. In contrast, structured annotations such as scene graphs (SGs) that could improve these models are time-consuming and costly to obtain, and thus cannot be used on a large scale. Here we ask whether small SG datasets can provide sufficient information for enhancing structured understanding of pretrained VLMs. We show that it is indeed possible to improve VLMs when learning from SGs by integrating components that incorporate structured information into both visual and textual representations. For the visual side, we incorporate a special “SG Component” in the image transformer trained to predict SG information, while for the textual side, we utilize SGs to generate fine-grained captions that highlight different compositional aspects of the scene. Our method improves the performance of several popular VLMs on multiple VL datasets with only a mild degradation in ZS capabilities.
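
As a rough illustration of the visual-side idea in the abstract (an auxiliary “SG Component” head trained to predict scene-graph information alongside the usual image-text objective), here is a minimal PyTorch sketch; the head design, tensor shapes, and loss weighting are assumptions for illustration, not the paper's implementation.

```python
# Sketch (assumptions, not the paper's code): a small "SG Component" head on top of
# patch-level image features, trained to predict scene-graph objects and relations
# in addition to the usual contrastive image-text objective.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SGComponent(nn.Module):
    """Predicts object and relation labels from patch-level image features."""
    def __init__(self, dim, num_objects, num_relations):
        super().__init__()
        self.obj_head = nn.Linear(dim, num_objects)
        self.rel_head = nn.Linear(2 * dim, num_relations)

    def forward(self, patch_feats, subj_idx, obj_idx):
        # patch_feats: (B, P, dim); subj_idx / obj_idx index the patches of a subject-object pair
        obj_logits = self.obj_head(patch_feats)                                   # (B, P, num_objects)
        pair = torch.cat([patch_feats[:, subj_idx], patch_feats[:, obj_idx]], -1) # (B, 2*dim)
        rel_logits = self.rel_head(pair)                                          # (B, num_relations)
        return obj_logits, rel_logits

def total_loss(contrastive_loss, obj_logits, obj_targets, rel_logits, rel_targets,
               sg_weight=0.1):
    """Standard image-text loss plus an auxiliary SG prediction loss (weight is illustrative)."""
    sg_loss = (F.cross_entropy(obj_logits.flatten(0, 1), obj_targets.flatten())
               + F.cross_entropy(rel_logits, rel_targets))
    return contrastive_loss + sg_weight * sg_loss
```

On the textual side, the abstract describes using the SGs to synthesize fine-grained captions (e.g., highlighting attributes and relations of objects in the scene), which are then paired with the images for the standard training objective.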