Beyond the Leaderboard: Understanding Performance Disparities in Large Language Models via Model Diffing

Sabri Boughorbel, Fahim Dalvi, Nadir Durrani, Majd Hawasly


Abstract
As fine-tuning becomes the dominant paradigm for improving large language models (LLMs), understanding what changes during this process is increasingly important. Traditional benchmarking often fails to explain _why_ one model outperforms another. In this work, we use model diffing, a mechanistic interpretability approach, to analyze the specific capability differences between Gemma-2-9b-it and a SimPO-enhanced variant. Using crosscoders, we identify and categorize the latent representations that differentiate the two models. We find that the latent concepts acquired through SimPO predominantly enhance safety mechanisms (+32.8%), multilingual capabilities (+43.8%), and instruction-following (+151.7%), while the additional training reduces emphasis on model self-reference (-44.1%) and hallucination management (-68.5%). Our analysis shows that model diffing can yield fine-grained insights beyond leaderboard metrics, attributing performance gaps to concrete mechanistic capabilities. This approach offers a transparent and targeted framework for comparing LLMs.
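For readers unfamiliar with the technique, the sketch below illustrates one plausible crosscoder setup for model diffing. It is a minimal PyTorch sketch under our own assumptions, not the authors' implementation; the names `CrossCoder`, `crosscoder_loss`, and `relative_decoder_norm` are illustrative. The assumed idea is that a shared sparse latent space reconstructs activations from both models, and latents whose decoder norm concentrates on one model are candidates for model-specific concepts of the kind the paper categorizes.

```python
import torch
import torch.nn as nn

class CrossCoder(nn.Module):
    """Minimal crosscoder: a shared sparse latent space that reconstructs
    residual-stream activations from two models simultaneously."""

    def __init__(self, d_model: int, d_latent: int):
        super().__init__()
        # Joint encoder over the concatenated activations of both models.
        self.encoder = nn.Linear(2 * d_model, d_latent)
        # One decoder per model, sharing the same latent dictionary.
        self.decoder_a = nn.Linear(d_latent, d_model, bias=False)
        self.decoder_b = nn.Linear(d_latent, d_model, bias=False)

    def forward(self, act_a: torch.Tensor, act_b: torch.Tensor):
        z = torch.relu(self.encoder(torch.cat([act_a, act_b], dim=-1)))
        return self.decoder_a(z), self.decoder_b(z), z

def crosscoder_loss(model: CrossCoder, act_a, act_b, l1_coeff: float = 1e-3):
    # Reconstruction error for both models plus an L1 sparsity penalty.
    rec_a, rec_b, z = model(act_a, act_b)
    recon = ((rec_a - act_a) ** 2).mean() + ((rec_b - act_b) ** 2).mean()
    return recon + l1_coeff * z.abs().mean()

def relative_decoder_norm(model: CrossCoder) -> torch.Tensor:
    # Per-latent score in [0, 1]: values near 0 or 1 flag latents that
    # decode almost exclusively into one of the two models.
    norm_a = model.decoder_a.weight.norm(dim=0)  # shape: (d_latent,)
    norm_b = model.decoder_b.weight.norm(dim=0)
    return norm_a / (norm_a + norm_b + 1e-8)

if __name__ == "__main__":
    # Toy activations standing in for residual streams of the two models.
    d_model, d_latent, batch = 64, 512, 32
    cc = CrossCoder(d_model, d_latent)
    a = torch.randn(batch, d_model)
    b = torch.randn(batch, d_model)
    print("loss:", crosscoder_loss(cc, a, b).item())
    print("model-specific latents:", (relative_decoder_norm(cc) > 0.9).sum().item())
```

In practice, the activation batches would come from matched positions in the two compared models (here, Gemma-2-9b-it and its SimPO variant) rather than random tensors, and the flagged latents would then be inspected and categorized by the concepts they activate on.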
Anthology ID:
2025.emnlp-main.1598
Volume:
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing
Month:
November
Year:
2025
Address:
Suzhou, China
Editors:
Christos Christodoulopoulos, Tanmoy Chakraborty, Carolyn Rose, Violet Peng
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
31348–31359
URL:
https://preview.aclanthology.org/ingest-emnlp/2025.emnlp-main.1598/
Cite (ACL):
Sabri Boughorbel, Fahim Dalvi, Nadir Durrani, and Majd Hawasly. 2025. Beyond the Leaderboard: Understanding Performance Disparities in Large Language Models via Model Diffing. In Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing, pages 31348–31359, Suzhou, China. Association for Computational Linguistics.
Cite (Informal):
Beyond the Leaderboard: Understanding Performance Disparities in Large Language Models via Model Diffing (Boughorbel et al., EMNLP 2025)
PDF:
https://preview.aclanthology.org/ingest-emnlp/2025.emnlp-main.1598.pdf
Checklist:
 2025.emnlp-main.1598.checklist.pdf