Language Models Can be Efficiently Steered via Minimal Embedding Layer Transformations

Diogo Tavares, David Semedo, Alexander Rudnicky, Joao Magalhaes


Abstract
Large Language Models (LLMs) are increasingly costly to fine-tune due to their size, with embedding layers alone accounting for up to 20% of model parameters. While Parameter-Efficient Fine-Tuning (PEFT) methods exist, they largely overlook the embedding layer. In this paper, we introduce TinyTE, a novel PEFT approach that steers model behavior via minimal translational transformations in the embedding space. TinyTE modifies input embeddings without altering hidden layers, achieving competitive performance while requiring approximately 0.0001% of the parameters needed for full fine-tuning. Experiments across architectures provide a new lens for understanding the relationship between input representations and model behavior—revealing them to be more flexible at their foundation than previously thought.
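To make the core idea concrete, the sketch below shows one way a translational transformation of the embedding space could look in PyTorch: a frozen token-embedding layer whose outputs are shifted by a single learnable offset vector, with no changes to any hidden layer. This is a hypothetical illustration of the general technique described in the abstract, not the authors' released implementation; the class name `EmbeddingTranslation`, the placeholder dimensions, and the wrapping strategy are assumptions.

```python
import torch
import torch.nn as nn

class EmbeddingTranslation(nn.Module):
    """Wraps a frozen token-embedding layer and adds one learnable
    translation vector to every input embedding. Hypothetical sketch of
    embedding-space steering; not the paper's exact method."""

    def __init__(self, embedding: nn.Embedding):
        super().__init__()
        self.embedding = embedding
        for p in self.embedding.parameters():
            p.requires_grad = False  # base embeddings stay frozen
        # A single d-dimensional offset: the only trainable parameters.
        self.delta = nn.Parameter(torch.zeros(embedding.embedding_dim))

    def forward(self, input_ids: torch.Tensor) -> torch.Tensor:
        # Shift every token embedding by the same learned offset.
        return self.embedding(input_ids) + self.delta


# Usage sketch: swap the wrapper in for a model's input embeddings and
# train only `delta` with the usual task loss (placeholder dimensions).
vocab_size, d_model = 32000, 768
base_embed = nn.Embedding(vocab_size, d_model)
steered = EmbeddingTranslation(base_embed)
ids = torch.randint(0, vocab_size, (1, 8))
out = steered(ids)  # shape: (1, 8, d_model)
print(out.shape)
```

Under this kind of setup the trainable-parameter count is tiny: assuming a 7B-parameter model with hidden size 4096 and a single offset vector, that is 4096 trainable parameters, roughly 0.00006% of the full model, which is in the same order of magnitude as the ~0.0001% figure quoted in the abstract.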
Anthology ID:
2025.emnlp-main.1170
Volume:
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing
Month:
November
Year:
2025
Address:
Suzhou, China
Editors:
Christos Christodoulopoulos, Tanmoy Chakraborty, Carolyn Rose, Violet Peng
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
22960–22978
URL:
https://preview.aclanthology.org/ingest-emnlp/2025.emnlp-main.1170/
Cite (ACL):
Diogo Tavares, David Semedo, Alexander Rudnicky, and Joao Magalhaes. 2025. Language Models Can be Efficiently Steered via Minimal Embedding Layer Transformations. In Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing, pages 22960–22978, Suzhou, China. Association for Computational Linguistics.
Cite (Informal):
Language Models Can be Efficiently Steered via Minimal Embedding Layer Transformations (Tavares et al., EMNLP 2025)
PDF:
https://preview.aclanthology.org/ingest-emnlp/2025.emnlp-main.1170.pdf
Checklist:
2025.emnlp-main.1170.checklist.pdf