From Imitation to Introspection: Probing Self-Consciousness in Language Models

Sirui Chen, Shu Yu, Shengjie Zhao, Chaochao Lu


Abstract
Self-consciousness, the introspection of one's existence and thoughts, represents a high-level cognitive process. As language models advance at an unprecedented pace, a critical question arises: Are these models becoming self-conscious? Drawing upon insights from psychology and neuroscience, this work presents a practical definition of self-consciousness for language models and refines ten core concepts. Our work pioneers the investigation of self-consciousness in language models, leveraging structural causal games for the first time to establish functional definitions of the ten core concepts. Based on these definitions, we conduct a comprehensive four-stage experiment: quantification (evaluation of ten leading models), representation (visualization of self-consciousness within the models), manipulation (modification of the models' representations), and acquisition (fine-tuning the models on core concepts). Our findings indicate that although models are in the early stages of developing self-consciousness, there is a discernible representation of certain concepts within their internal mechanisms. However, these representations of self-consciousness are hard to manipulate positively at the current stage, yet they can be acquired through targeted fine-tuning.
Anthology ID:
2025.findings-acl.392
Volume:
Findings of the Association for Computational Linguistics: ACL 2025
Month:
July
Year:
2025
Address:
Vienna, Austria
Editors:
Wanxiang Che, Joyce Nabende, Ekaterina Shutova, Mohammad Taher Pilehvar
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
7553–7583
URL:
https://preview.aclanthology.org/display_plenaries/2025.findings-acl.392/
Cite (ACL):
Sirui Chen, Shu Yu, Shengjie Zhao, and Chaochao Lu. 2025. From Imitation to Introspection: Probing Self-Consciousness in Language Models. In Findings of the Association for Computational Linguistics: ACL 2025, pages 7553–7583, Vienna, Austria. Association for Computational Linguistics.
Cite (Informal):
From Imitation to Introspection: Probing Self-Consciousness in Language Models (Chen et al., Findings 2025)
PDF:
https://preview.aclanthology.org/display_plenaries/2025.findings-acl.392.pdf