Bridging the Faithfulness Gap in Prototypical Models
Andrew Koulogeorge, Sean Xie, Saeed Hassanpour, Soroush Vosoughi
Abstract
Prototypical Network-based Language Models (PNLMs) have been introduced as a novel approach for enhancing interpretability in deep learning models for NLP. In this work, we show that, despite the transparency afforded by their case-based reasoning architecture, current PNLMs are, in fact, not faithful, i.e., their explanations do not accurately reflect the underlying model's reasoning process. By adopting an axiomatic approach grounded in the seminal works' definition of faithfulness, we identify two specific points in the architecture of PNLMs where unfaithfulness may occur. To address this, we introduce Faithful Alignment (FA), a two-part framework that ensures the faithfulness of PNLMs' explanations. We then demonstrate that FA achieves this goal without compromising model performance across a variety of downstream tasks and ablation studies.
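For readers unfamiliar with the case-based reasoning architecture the abstract refers to, the sketch below shows a generic ProtoPNet-style classification head: an encoder embedding is scored against learned prototype vectors, and the similarity scores both drive the class logits and serve as the model's explanation. This is a minimal illustrative example, not the paper's FA framework; all names and dimensions are assumptions.

```python
# Minimal sketch of a generic prototypical classification head
# (ProtoPNet-style). Illustrative only -- NOT the paper's FA method.
import torch
import torch.nn as nn


class PrototypeHead(nn.Module):
    def __init__(self, hidden_dim: int, num_prototypes: int, num_classes: int):
        super().__init__()
        # Learned prototype vectors in the encoder's embedding space.
        self.prototypes = nn.Parameter(torch.randn(num_prototypes, hidden_dim))
        # Linear map from prototype similarities to class logits.
        self.classifier = nn.Linear(num_prototypes, num_classes, bias=False)

    def forward(self, embedding: torch.Tensor):
        # Squared L2 distance from each embedding to each prototype.
        dists = torch.cdist(embedding, self.prototypes).pow(2)
        # Distance-to-similarity transform used in ProtoPNet:
        # closer prototypes receive higher activation.
        sims = torch.log((dists + 1.0) / (dists + 1e-4))
        # `sims` doubles as the case-based explanation: it reports which
        # prototypes (i.e., which training-like cases) drove the prediction.
        return self.classifier(sims), sims


head = PrototypeHead(hidden_dim=768, num_prototypes=10, num_classes=2)
logits, sims = head(torch.randn(4, 768))  # batch of 4 encoder embeddings
print(logits.shape, sims.shape)  # torch.Size([4, 2]) torch.Size([4, 10])
```

The faithfulness question the paper raises concerns precisely this pipeline: whether the reported prototype similarities actually determine the prediction, or whether downstream components can override them.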
- Anthology ID: 2025.insights-1.9
- Volume: The Sixth Workshop on Insights from Negative Results in NLP
- Month: May
- Year: 2025
- Address: Albuquerque, New Mexico
- Editors: Aleksandr Drozd, João Sedoc, Shabnam Tafreshi, Arjun Akula, Raphael Shu
- Venues: insights | WS
- Publisher: Association for Computational Linguistics
- Pages: 86–99
- URL: https://preview.aclanthology.org/fix-sig-urls/2025.insights-1.9/
- Cite (ACL): Andrew Koulogeorge, Sean Xie, Saeed Hassanpour, and Soroush Vosoughi. 2025. Bridging the Faithfulness Gap in Prototypical Models. In The Sixth Workshop on Insights from Negative Results in NLP, pages 86–99, Albuquerque, New Mexico. Association for Computational Linguistics.
- Cite (Informal): Bridging the Faithfulness Gap in Prototypical Models (Koulogeorge et al., insights 2025)
- PDF: https://preview.aclanthology.org/fix-sig-urls/2025.insights-1.9.pdf