Enhancing the Comprehensibility of Text Explanations via Unsupervised Concept Discovery

Yifan Sun, Danding Wang, Qiang Sheng, Juan Cao, Jintao Li


Abstract
Concept-based approaches have emerged as a promising direction in explainable AI because they interpret models in a way that aligns with human reasoning. However, their adoption in the text domain remains limited. Most existing methods rely on predefined concept annotations and cannot discover unseen concepts, while methods that extract concepts without supervision often produce explanations that are not intuitively comprehensible to humans, potentially diminishing user trust. In short, these methods fall short of discovering comprehensible concepts automatically. To address this issue, we propose ECO-Concept, an intrinsically interpretable framework that discovers comprehensible concepts without concept annotations. ECO-Concept first uses an object-centric architecture to extract semantic concepts automatically. The comprehensibility of the extracted concepts is then evaluated by large language models, and the evaluation results guide subsequent fine-tuning so that the model explains its predictions with relatively comprehensible concepts. Experiments show that our method achieves superior performance across diverse tasks, and further concept evaluations confirm that the concepts learned by ECO-Concept surpass current counterparts in comprehensibility.
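The abstract outlines a three-stage pipeline: automatic concept extraction with an object-centric architecture, LLM-based scoring of how comprehensible each concept is, and fine-tuning guided by those scores. As a rough illustration only, the PyTorch sketch below shows one way such a comprehensibility-weighted objective could be wired together; the module names, dimensions, the random scoring stub, and the 0.1 penalty weight are assumptions made for this sketch, not the authors' implementation (see the linked PDF for the actual method).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class ConceptExtractor(nn.Module):
    """Slot-style extractor (illustrative): learnable concept queries attend over
    token embeddings, and each resulting concept vector is projected to a scalar
    activation."""

    def __init__(self, hidden_dim: int = 768, num_concepts: int = 8):
        super().__init__()
        self.concept_queries = nn.Parameter(torch.randn(num_concepts, hidden_dim))
        self.attn = nn.MultiheadAttention(hidden_dim, num_heads=4, batch_first=True)
        self.score = nn.Linear(hidden_dim, 1)

    def forward(self, token_embeddings: torch.Tensor):
        # token_embeddings: (batch, seq_len, hidden_dim)
        queries = self.concept_queries.unsqueeze(0).expand(token_embeddings.size(0), -1, -1)
        concept_vecs, attn_weights = self.attn(queries, token_embeddings, token_embeddings)
        activations = self.score(concept_vecs).squeeze(-1)  # (batch, num_concepts)
        return activations, attn_weights


def llm_comprehensibility(num_concepts: int) -> torch.Tensor:
    """Placeholder for prompting an LLM to rate how understandable each concept's
    top-activating text spans are (1 = clear, 0 = opaque). Random here purely so
    the sketch runs end to end."""
    return torch.rand(num_concepts)


num_concepts = 8
extractor = ConceptExtractor(num_concepts=num_concepts)
classifier = nn.Linear(num_concepts, 2)   # predict from concept activations only

tokens = torch.randn(4, 32, 768)          # stand-in for a text encoder's outputs
labels = torch.randint(0, 2, (4,))

activations, _ = extractor(tokens)
task_loss = F.cross_entropy(classifier(activations), labels)

scores = llm_comprehensibility(num_concepts)   # higher = judged more comprehensible
usage = activations.abs().mean(dim=0)          # how strongly each concept fires
penalty = ((1.0 - scores) * usage).sum()       # discourage reliance on unclear concepts

loss = task_loss + 0.1 * penalty               # 0.1 is an arbitrary weight for this sketch
loss.backward()
```

The point the sketch tries to capture is that concepts the LLM judges hard to understand are down-weighted during fine-tuning, nudging the classifier to rely on clearer ones.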
Anthology ID: 2025.findings-acl.758
Volume: Findings of the Association for Computational Linguistics: ACL 2025
Month: July
Year: 2025
Address: Vienna, Austria
Editors: Wanxiang Che, Joyce Nabende, Ekaterina Shutova, Mohammad Taher Pilehvar
Venue: Findings
Publisher: Association for Computational Linguistics
Pages: 14695–14713
URL: https://preview.aclanthology.org/mtsummit-25-ingestion/2025.findings-acl.758/
DOI: 10.18653/v1/2025.findings-acl.758
Cite (ACL): Yifan Sun, Danding Wang, Qiang Sheng, Juan Cao, and Jintao Li. 2025. Enhancing the Comprehensibility of Text Explanations via Unsupervised Concept Discovery. In Findings of the Association for Computational Linguistics: ACL 2025, pages 14695–14713, Vienna, Austria. Association for Computational Linguistics.
Cite (Informal): Enhancing the Comprehensibility of Text Explanations via Unsupervised Concept Discovery (Sun et al., Findings 2025)
PDF: https://preview.aclanthology.org/mtsummit-25-ingestion/2025.findings-acl.758.pdf