Large Language Models Are Overparameterized Text Encoders

Thennal D K, Tim Fischer, Chris Biemann


Abstract
Large language models (LLMs) demonstrate strong performance as text embedding models when finetuned with supervised contrastive training. However, their large size balloons inference time and memory requirements. In this paper, we show that by pruning the last p% of layers of an LLM before supervised training for only 1000 steps, we can achieve a proportional reduction in memory and inference time. We evaluate four different state-of-the-art LLMs on text embedding tasks and find that our method can prune up to 30% of layers with negligible impact on performance and up to 80% with only a modest drop. With only three lines of code, our method is easily implemented in any pipeline for transforming LLMs into text encoders. We also propose L3Prune, a novel layer-pruning strategy based on the model’s initial loss that provides two optimal pruning configurations: a large variant with negligible performance loss and a small variant for resource-constrained settings. On average, the large variant prunes 21% of the parameters with a negligible performance drop, and the small variant suffers only a modest decrease while pruning 74% of the model. We consider these results strong evidence that LLMs are overparameterized for text embedding tasks and can be easily pruned.
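
The pruning step the abstract describes is straightforward to reproduce. Below is a minimal sketch (not the authors' released code; the checkpoint name and the model.layers attribute path are assumptions that vary by architecture) of truncating the last layers of a Hugging Face decoder-only LLM before supervised contrastive finetuning:

    from transformers import AutoModel

    # Illustrative sketch, not the paper's implementation: drop the last
    # fraction of decoder layers before contrastive finetuning.
    model = AutoModel.from_pretrained("mistralai/Mistral-7B-v0.1")  # example checkpoint (assumption)

    prune_fraction = 0.3                     # e.g. remove the last 30% of layers
    n_keep = int(len(model.layers) * (1 - prune_fraction))

    model.layers = model.layers[:n_keep]     # slice the nn.ModuleList of decoder layers
    model.config.num_hidden_layers = n_keep  # keep the config consistent with the new depth

The truncated model can then be finetuned with the usual supervised contrastive objective; a prune_fraction of 0.3 corresponds to the "up to 30% of layers with negligible impact" setting reported in the abstract.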
Anthology ID:
2025.repl4nlp-1.13
Volume:
Proceedings of the 10th Workshop on Representation Learning for NLP (RepL4NLP-2025)
Month:
May
Year:
2025
Address:
Albuquerque, NM
Editors:
Vaibhav Adlakha, Alexandra Chronopoulou, Xiang Lorraine Li, Bodhisattwa Prasad Majumder, Freda Shi, Giorgos Vernikos
Venues:
RepL4NLP | WS
Publisher:
Association for Computational Linguistics
Pages:
170–184
URL:
https://preview.aclanthology.org/landing_page/2025.repl4nlp-1.13/
Cite (ACL):
Thennal D K, Tim Fischer, and Chris Biemann. 2025. Large Language Models Are Overparameterized Text Encoders. In Proceedings of the 10th Workshop on Representation Learning for NLP (RepL4NLP-2025), pages 170–184, Albuquerque, NM. Association for Computational Linguistics.
Cite (Informal):
Large Language Models Are Overparameterized Text Encoders (K et al., RepL4NLP 2025)
PDF:
https://preview.aclanthology.org/landing_page/2025.repl4nlp-1.13.pdf