Abstract
The rise of the term “mechanistic interpretability” has accompanied increasing interest in understanding neural models—particularly language models. However, this jargon has also led to a fair amount of confusion. So, what does it mean to be mechanistic? We describe four uses of the term in interpretability research. The most narrow technical definition requires a claim of causality, while a broader technical definition allows for any exploration of a model’s internals. However, the term also has a narrow cultural definition describing a cultural movement. To understand this semantic drift, we present a history of the NLP interpretability community and the formation of the separate, parallel mechanistic interpretability community. Finally, we discuss the broad cultural definition—encompassing the entire field of interpretability—and why the traditional NLP interpretability community has come to embrace it. We argue that the polysemy of “mechanistic” is the product of a critical divide within the interpretability community.

- Anthology ID: 2024.blackboxnlp-1.30
- Volume: Proceedings of the 7th BlackboxNLP Workshop: Analyzing and Interpreting Neural Networks for NLP
- Month: November
- Year: 2024
- Address: Miami, Florida, US
- Editors: Yonatan Belinkov, Najoung Kim, Jaap Jumelet, Hosein Mohebbi, Aaron Mueller, Hanjie Chen
- Venue: BlackboxNLP
- Publisher: Association for Computational Linguistics
- Pages: 480–498
- URL: https://aclanthology.org/2024.blackboxnlp-1.30
- DOI: 10.18653/v1/2024.blackboxnlp-1.30
- Cite (ACL): Naomi Saphra and Sarah Wiegreffe. 2024. Mechanistic?. In Proceedings of the 7th BlackboxNLP Workshop: Analyzing and Interpreting Neural Networks for NLP, pages 480–498, Miami, Florida, US. Association for Computational Linguistics.
- Cite (Informal): Mechanistic? (Saphra & Wiegreffe, BlackboxNLP 2024)
- PDF: https://preview.aclanthology.org/dois-2013-emnlp/2024.blackboxnlp-1.30.pdf