Connecting Concept Layers and Rationales to Enhance Language Model Interpretability

Thomas Bailleux, Tanmoy Mukherjee, Pierre Marquis, Zied Bouraoui


Abstract
With the introduction of large language models, NLP has undergone a paradigm shift: these models now serve as the backbone of most systems. However, while highly effective, they remain opaque and difficult to interpret, which limits their adoption in critical applications that require transparency and trust. Two major approaches aim to address this: rationale extraction, which highlights the input spans that justify a prediction, and concept bottleneck models, which make decisions through human-interpretable concepts. Yet each has limitations, and crucially, current models lack a unified framework that connects where a model looks (rationales) with why it makes a decision (concepts). We introduce CLARITY, a model that first selects key input spans, maps them to interpretable concepts, and then predicts using only those concepts. This design supports faithful, multi-level explanations and allows users to intervene at both the rationale and concept levels. CLARITY achieves competitive accuracy while offering improved transparency and controllability.
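To make the pipeline described in the abstract concrete, here is a minimal sketch of a rationale-to-concept-to-label model in PyTorch. The module names, layer sizes, soft gating, and pooling below are illustrative assumptions, not the authors' implementation; the paper itself should be consulted for the actual architecture.

```python
# Hypothetical sketch of a CLARITY-style pipeline:
# select spans (rationales) -> map to concepts -> predict from concepts only.
import torch
import torch.nn as nn

class ClaritySketch(nn.Module):
    def __init__(self, vocab_size=10000, hidden=128, n_concepts=8, n_labels=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden)
        self.span_scorer = nn.Linear(hidden, 1)            # "where the model looks"
        self.concept_head = nn.Linear(hidden, n_concepts)  # rationale -> concepts
        self.classifier = nn.Linear(n_concepts, n_labels)  # concepts only -> label

    def forward(self, token_ids):
        h = self.embed(token_ids)                          # (batch, seq, hidden)
        gates = torch.sigmoid(self.span_scorer(h))         # soft rationale mask in [0, 1]
        rationale = (h * gates).sum(dim=1)                 # pool only the selected spans
        concepts = torch.sigmoid(self.concept_head(rationale))  # interpretable bottleneck
        return self.classifier(concepts), gates, concepts

model = ClaritySketch()
logits, gates, concepts = model(torch.randint(0, 10000, (1, 12)))

# Concept-level intervention, as the abstract describes: clamp one
# concept activation and re-predict through the final classifier.
edited = concepts.clone()
edited[:, 0] = 0.0
print(model.classifier(edited))
```

Because the classifier sees only the concept activations, zeroing or editing a concept directly changes the prediction, which is what makes this bottleneck design controllable; an analogous intervention on `gates` would edit the rationale instead.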
Anthology ID:
2025.starsem-1.33
Volume:
Proceedings of the 14th Joint Conference on Lexical and Computational Semantics (*SEM 2025)
Month:
November
Year:
2025
Address:
Suzhou, China
Editors:
Lea Frermann, Mark Stevenson
Venue:
*SEM
Publisher:
Association for Computational Linguistics
Pages:
409–429
URL:
https://preview.aclanthology.org/ingest-emnlp/2025.starsem-1.33/
Cite (ACL):
Thomas Bailleux, Tanmoy Mukherjee, Pierre Marquis, and Zied Bouraoui. 2025. Connecting Concept Layers and Rationales to Enhance Language Model Interpretability. In Proceedings of the 14th Joint Conference on Lexical and Computational Semantics (*SEM 2025), pages 409–429, Suzhou, China. Association for Computational Linguistics.
Cite (Informal):
Connecting Concept Layers and Rationales to Enhance Language Model Interpretability (Bailleux et al., *SEM 2025)
PDF:
https://preview.aclanthology.org/ingest-emnlp/2025.starsem-1.33.pdf