Abstract
Transformer language models (LMs) have been shown to represent concepts as directions in the latent space of hidden activations. However, for any human-interpretable concept, how can we find its direction in the latent space? We present a technique called linear relational concepts (LRC) for finding concept directions corresponding to human-interpretable concepts by first modeling the relation between subject and object as a linear relational embedding (LRE). We find that inverting the LRE and using earlier object layers results in a powerful technique for finding concept directions that outperforms standard black-box probing classifiers. We evaluate LRCs on their performance as concept classifiers as well as their ability to causally change model output.
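The abstract does not spell out the LRE equation; the sketch below is a rough illustration, assuming the standard affine form o ≈ Ws + b from the linear relational embedding literature, with a concept direction obtained by mapping an object activation back into subject space via the pseudo-inverse of W. All names here (`concept_direction`, `W`, `b`, `o_hidden`) are illustrative assumptions, not identifiers from the paper.

```python
import numpy as np

def concept_direction(W: np.ndarray, b: np.ndarray, o_hidden: np.ndarray) -> np.ndarray:
    """Invert an LRE (o ≈ W s + b) to map an object activation back into
    subject space; the normalized result can serve as a concept direction."""
    W_pinv = np.linalg.pinv(W)           # pseudo-inverse, since W may be low-rank
    direction = W_pinv @ (o_hidden - b)  # s' = W^+ (o - b)
    return direction / np.linalg.norm(direction)

# Toy usage with random stand-ins for a fitted LRE and an object activation.
rng = np.random.default_rng(0)
d = 16
W, b, o = rng.normal(size=(d, d)), rng.normal(size=d), rng.normal(size=d)
print(concept_direction(W, b, o).shape)  # (16,)
```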
- Anthology ID: 2024.naacl-long.85
- Volume: Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)
- Month: June
- Year: 2024
- Address: Mexico City, Mexico
- Editors: Kevin Duh, Helena Gomez, Steven Bethard
- Venue: NAACL
- Publisher: Association for Computational Linguistics
- Pages: 1524–1535
- URL: https://aclanthology.org/2024.naacl-long.85
- Cite (ACL): David Chanin, Anthony Hunter, and Oana-Maria Camburu. 2024. Identifying Linear Relational Concepts in Large Language Models. In Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers), pages 1524–1535, Mexico City, Mexico. Association for Computational Linguistics.
- Cite (Informal): Identifying Linear Relational Concepts in Large Language Models (Chanin et al., NAACL 2024)
- PDF: https://preview.aclanthology.org/fix-volume-bibkeys/2024.naacl-long.85.pdf