Abstract
Activation steering methods have been shown to be effective in conditioning language model generation by additively intervening on models' intermediate representations. However, these techniques have so far been evaluated only on single conditioning properties and in synthetic settings. In this work, we conduct a comprehensive evaluation of various activation steering strategies, highlighting the property-dependent nature of optimal parameters for ensuring a robust effect throughout generation. To address this issue, we propose Dynamic Activation Composition, an information-theoretic approach to modulate the steering intensity of one or more properties throughout generation. Our experiments on multi-property steering show that our method successfully maintains high conditioning while minimizing the impact of conditioning on generation fluency.
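To make the two mechanisms in the abstract concrete, below is a minimal sketch of (i) additive intervention on a layer's hidden states and (ii) an information-theoretic gate on the per-step steering intensity, assuming the intensity is derived from the KL divergence between the steered and unsteered next-token distributions. The function names (`kl_gated_alpha`, `add_steering_hook`), the `alpha_max` cap, and the hook-based injection are illustrative assumptions, not the authors' released implementation.

```python
import torch
import torch.nn.functional as F

def kl_gated_alpha(logits_plain: torch.Tensor,
                   logits_steered: torch.Tensor,
                   alpha_max: float = 2.0) -> float:
    """Gate the steering intensity by how strongly a full-strength
    intervention shifts the next-token distribution: when the property is
    already expressed, the divergence (and hence alpha) shrinks, limiting
    damage to fluency."""
    p = F.log_softmax(logits_steered, dim=-1)  # steered distribution (log)
    q = F.log_softmax(logits_plain, dim=-1)    # unsteered distribution (log)
    # KL(p || q): input is log q, target is log p (log_target=True)
    kl = F.kl_div(q, p, log_target=True, reduction="sum")
    return min(alpha_max, kl.item())           # cap the per-step intensity

def add_steering_hook(layer: torch.nn.Module,
                      steering_vector: torch.Tensor,
                      alpha: float) -> torch.utils.hooks.RemovableHandle:
    """Additively intervene on a layer's output: h <- h + alpha * v.
    Handles both plain-tensor and tuple outputs (as in HF decoder layers)."""
    def hook(_module, _inputs, output):
        hidden = output[0] if isinstance(output, tuple) else output
        hidden = hidden + alpha * steering_vector
        return (hidden, *output[1:]) if isinstance(output, tuple) else hidden
    return layer.register_forward_hook(hook)
```

In this reading, each decoding step runs two forward passes, one unsteered and one with the full-strength intervention, derives `alpha` from their divergence, and then applies the scaled vector when sampling the next token; with multiple properties, each steering vector would receive its own independently gated `alpha`.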
- Anthology ID: 2024.blackboxnlp-1.34
- Volume: Proceedings of the 7th BlackboxNLP Workshop: Analyzing and Interpreting Neural Networks for NLP
- Month: November
- Year: 2024
- Address: Miami, Florida, US
- Editors: Yonatan Belinkov, Najoung Kim, Jaap Jumelet, Hosein Mohebbi, Aaron Mueller, Hanjie Chen
- Venue: BlackboxNLP
- Publisher: Association for Computational Linguistics
- Pages: 577–603
- URL: https://aclanthology.org/2024.blackboxnlp-1.34
- DOI: 10.18653/v1/2024.blackboxnlp-1.34
- Cite (ACL): Daniel Scalena, Gabriele Sarti, and Malvina Nissim. 2024. Multi-property Steering of Large Language Models with Dynamic Activation Composition. In Proceedings of the 7th BlackboxNLP Workshop: Analyzing and Interpreting Neural Networks for NLP, pages 577–603, Miami, Florida, US. Association for Computational Linguistics.
- Cite (Informal): Multi-property Steering of Large Language Models with Dynamic Activation Composition (Scalena et al., BlackboxNLP 2024)
- PDF: https://preview.aclanthology.org/landing_page/2024.blackboxnlp-1.34.pdf