Matthew Lyle Olson
2025
Probing Semantic Routing in Large Mixture-of-Expert Models
Matthew Lyle Olson | Neale Ratzlaff | Musashi Hinck | Man Luo | Sungduk Yu | Chendi Xue | Vasudev Lal
Findings of the Association for Computational Linguistics: EMNLP 2025
In the past year, large (>100B parameter) mixture-of-experts (MoE) models have become increasingly common in the open domain. While their advantages are often framed in terms of efficiency, prior work has also explored functional differentiation through routing behavior. We investigate whether expert routing in large MoE models is influenced by the semantics of the inputs. To test this, we design two controlled experiments. First, we compare activations on sentence pairs with a shared target word used in the same or different senses. Second, we fix the context and substitute the target word with semantically similar or dissimilar alternatives. Comparing expert overlap across these conditions reveals clear, statistically significant evidence of semantic routing in large MoE models.
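A minimal sketch of the overlap comparison the abstract describes, assuming access to the per-layer top-k expert indices that an MoE router selects for the shared target token (extracting these indices from a particular MoE implementation is left out, and the index values below are purely illustrative):

```python
import numpy as np

def expert_overlap(experts_a, experts_b):
    """Jaccard overlap between two sets of routed expert indices."""
    a, b = set(experts_a), set(experts_b)
    return len(a & b) / len(a | b)

def mean_overlap_across_layers(routing_a, routing_b):
    """Average per-layer overlap for the target token.

    routing_a / routing_b: one entry per MoE layer, each holding the
    top-k expert indices selected for the target token in that layer.
    """
    return float(np.mean([expert_overlap(a, b)
                          for a, b in zip(routing_a, routing_b)]))

# Toy example: two MoE layers with top-2 routing.
same_sense = ([[3, 17], [5, 42]], [[3, 17], [5, 40]])
diff_sense = ([[3, 17], [5, 42]], [[9, 28], [7, 11]])

print(mean_overlap_across_layers(*same_sense))  # higher overlap expected
print(mean_overlap_across_layers(*diff_sense))  # lower overlap expected
```

Under the paper's hypothesis, same-sense pairs should yield systematically higher overlap than different-sense pairs, which can then be checked with a standard significance test over many sentence pairs.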
2024
Why do LLaVA Vision-Language Models Reply to Images in English?
Musashi Hinck | Carolin Holtermann | Matthew Lyle Olson | Florian Schneider | Sungduk Yu | Anahita Bhiwandiwalla | Anne Lauscher | Shao-Yen Tseng | Vasudev Lal
Findings of the Association for Computational Linguistics: EMNLP 2024
We uncover a surprising multilingual bias in a popular class of multimodal vision-language models (VLMs). Including an image in the query to a LLaVA-style VLM significantly increases the likelihood that the model returns an English response, regardless of the language of the query. This paper investigates the causes of this bias with a two-pronged approach that combines extensive ablation of the design space with a mechanistic analysis of the models' internal representations of image and text inputs. Both approaches indicate that the issue stems from the language modeling component of the LLaVA model. Statistically, we find that swapping the language backbone for a bilingual language model has the strongest effect on reducing this error. Mechanistically, we provide compelling evidence that visual inputs are not mapped into the same space as textual ones, and that intervening on intermediate attention layers can reduce this bias. Our findings provide important insights to researchers and engineers seeking to understand the crossover between multimodal and multilingual spaces, and contribute to the goal of developing capable and inclusive VLMs for non-English contexts.
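A hedged sketch of the kind of attention-layer intervention the abstract alludes to, written with generic PyTorch forward hooks. The module path (a LLaMA-style `model.model.layers[i].self_attn`) and the simple output-scaling intervention are illustrative assumptions, not the paper's exact procedure:

```python
import torch

def make_scaling_hook(alpha):
    """Return a forward hook that scales a layer's output by alpha
    (an illustrative intervention on intermediate activations)."""
    def hook(module, inputs, output):
        # Attention blocks often return a tuple; scale only the hidden states.
        if isinstance(output, tuple):
            return (output[0] * alpha,) + output[1:]
        return output * alpha
    return hook

def intervene_on_attention(model, layer_indices, alpha=0.5):
    """Attach scaling hooks to selected attention layers.

    Assumes a LLaMA-style layout, model.model.layers[i].self_attn;
    adjust the path for other architectures.
    """
    handles = []
    for i in layer_indices:
        attn = model.model.layers[i].self_attn  # hypothetical module path
        handles.append(attn.register_forward_hook(make_scaling_hook(alpha)))
    return handles  # call h.remove() on each handle to undo the intervention
```

Because a forward hook that returns a value replaces the module's output, this pattern lets one dampen or modify intermediate attention outputs at inference time and observe the effect on the language of the model's response.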