ProtoLens: Advancing Prototype Learning for Fine-Grained Interpretability in Text Classification

Bowen Wei, Ziwei Zhu
Abstract
In this work, we propose ProtoLens, a novel prototype-based model that provides fine-grained, sub-sentence level interpretability for text classification. ProtoLens uses a Prototype-aware Span Extraction module to identify relevant text spans associated with learned prototypes and a Prototype Alignment mechanism to ensure prototypes are semantically meaningful throughout training. By aligning the prototype embeddings with human-understandable examples, ProtoLens provides interpretable predictions while maintaining competitive accuracy. Extensive experiments demonstrate that ProtoLens outperforms both prototype-based and non-interpretable baselines on multiple text classification benchmarks. Code and data are available at https://github.com/weibowen555/ProtoLens.
Anthology ID:
2025.acl-long.226
Volume:
Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
Month:
July
Year:
2025
Address:
Vienna, Austria
Editors:
Wanxiang Che, Joyce Nabende, Ekaterina Shutova, Mohammad Taher Pilehvar
Venue:
ACL
Publisher:
Association for Computational Linguistics
Pages:
4503–4523
URL:
https://preview.aclanthology.org/ingestion-acl-25/2025.acl-long.226/
Cite (ACL):
Bowen Wei and Ziwei Zhu. 2025. ProtoLens: Advancing Prototype Learning for Fine-Grained Interpretability in Text Classification. In Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 4503–4523, Vienna, Austria. Association for Computational Linguistics.
Cite (Informal):
ProtoLens: Advancing Prototype Learning for Fine-Grained Interpretability in Text Classification (Wei & Zhu, ACL 2025)
PDF:
https://preview.aclanthology.org/ingestion-acl-25/2025.acl-long.226.pdf