Michael A. Lepori
2025
Pixels Versus Priors: Controlling Knowledge Priors in Vision-Language Models through Visual Counterfacts
Michal Golovanevsky | William Rudman | Michael A. Lepori | Amir Bar | Ritambhara Singh | Carsten Eickhoff
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing
Multimodal Large Language Models (MLLMs) perform well on tasks such as visual question answering, but it remains unclear whether their reasoning relies more on memorized world knowledge or on the visual information present in the input image. To investigate this, we introduce Visual CounterFact, a new dataset of visually-realistic counterfactuals that put world knowledge priors (e.g., red strawberry) into direct conflict with visual input (e.g., blue strawberry). Using Visual CounterFact, we show that model predictions initially reflect memorized priors, but shift toward visual evidence in mid-to-late layers. This dynamic reveals a competition between the two modalities, with visual input ultimately overriding priors during evaluation. To control this behavior, we propose Pixels Versus Priors (PvP) steering vectors, a mechanism for controlling model outputs toward either world knowledge or visual input through activation-level interventions. On average, PvP successfully shifts 99.3% of color and 80.8% of size predictions from priors to counterfactuals. Together, these findings offer new tools for interpreting and controlling factual behavior in multimodal models.
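The abstract does not give implementation details, so the sketch below only illustrates the general activation-steering recipe it alludes to: a steering vector added to a hidden layer's output via a forward hook. The layer choice, the mean-difference construction, the placeholder activations, and the strength `alpha` are assumptions for illustration, not the paper's actual PvP setup.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
d_model = 64

# Stand-in for one mid-to-late transformer layer of an MLLM (assumption: the
# intervention targets the output of such a layer).
layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=4, batch_first=True)

# Placeholder activations from paired prompts: one set where the answer follows
# the world-knowledge prior, one where it follows the counterfactual image.
prior_acts = torch.randn(32, d_model)
counterfact_acts = prior_acts + 0.5 * torch.randn(32, d_model)

# Steering vector as a difference of means (a common steering recipe; the exact
# PvP construction may differ).
steering_vec = counterfact_acts.mean(dim=0) - prior_acts.mean(dim=0)
alpha = 4.0  # assumed steering strength

def steer_hook(module, inputs, output):
    # Returning a value from a forward hook replaces the layer's output,
    # shifting every token's hidden state along the steering direction.
    return output + alpha * steering_vec

handle = layer.register_forward_hook(steer_hook)
hidden = torch.randn(1, 10, d_model)   # placeholder hidden states for 10 tokens
steered = layer(hidden)                # forward pass with the intervention applied
handle.remove()
print(steered.shape)                   # torch.Size([1, 10, 64])
```

Flipping the sign of `alpha` would steer in the opposite direction, i.e., toward the memorized prior rather than the counterfactual visual evidence.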
Racing Thoughts: Explaining Contextualization Errors in Large Language Models
Michael A. Lepori | Michael Curtis Mozer | Asma Ghandeharioun
Proceedings of the 2025 Conference of the Nations of the Americas Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)
The profound success of transformer-based language models can largely be attributed to their ability to integrate relevant contextual information from an input sequence in order to generate a response or complete a task. However, we know very little about the algorithms that a model employs to implement this capability, nor do we understand their failure modes. For example, given the prompt “John is going fishing, so he walks over to the bank. Can he make an ATM transaction?”, a model may incorrectly respond “Yes” if it has not properly contextualized “bank” as a geographical feature, rather than a financial institution. We propose the LLM Race Conditions Hypothesis as an explanation of contextualization errors of this form. This hypothesis identifies dependencies between tokens (e.g., “bank” must be properly contextualized before the final token, “?”, integrates information from “bank”), and claims that contextualization errors are a result of violating these dependencies. Using a variety of techniques from mechanistic interpretability, we provide correlational and causal evidence in support of the hypothesis and suggest inference-time interventions to address it.
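As an illustration of the kind of per-layer measurement such mechanistic analyses rely on, the sketch below scores a polysemous token's hidden state against two candidate readings across layers, in the style of a logit-lens probe. All tensors are random placeholders and the probe itself is an assumption for illustration, not the paper's actual methodology.

```python
import torch

torch.manual_seed(0)
n_layers, d_model = 12, 64

# Placeholder per-layer hidden states for the token "bank" in the fishing prompt.
bank_states = torch.randn(n_layers, d_model)

# Placeholder direction vectors for the two readings (e.g., rows of an
# unembedding or probe matrix; hypothetical here).
river_dir = torch.randn(d_model)
finance_dir = torch.randn(d_model)

# Score each reading at every layer. Under the Race Conditions Hypothesis,
# errors arise when the final token reads from "bank" before the correct
# (river) reading wins this race.
river_score = bank_states @ river_dir
finance_score = bank_states @ finance_dir
river_wins = river_score > finance_score
first_layer = int(river_wins.int().argmax()) if river_wins.any() else None
print("river reading first dominates at layer:", first_layer)
```

With real model activations, comparing this crossover layer to the layer at which the final token integrates information from “bank” would operationalize the dependency the hypothesis describes.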