GenEOL: Harnessing the Generative Power of LLMs for Training-Free Sentence Embeddings

Raghuveer Thirukovalluru, Bhuwan Dhingra


Abstract
Training-free embedding methods directly leverage pretrained large language models (LLMs) to embed text, bypassing the costly and complex procedure of contrastive learning. Previous training-free embedding methods have mainly focused on optimizing embedding prompts and have overlooked the benefits of utilizing the generative abilities of LLMs. We propose a novel method, GenEOL, which uses LLMs to generate diverse, meaning-preserving transformations of a sentence and aggregates the resulting embeddings of these transformations to enhance the overall sentence embedding. GenEOL significantly outperforms existing training-free embedding methods by an average of 2.85 points across several LLMs on the semantic textual similarity (STS) benchmark. GenEOL also achieves notable gains in clustering, reranking, and pair-classification tasks from the MTEB benchmark. Additionally, GenEOL stabilizes representation quality across LLM layers and remains robust to perturbations of embedding prompts.
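
To make the pipeline in the abstract concrete, here is a minimal sketch of the idea (not the authors' released code): an LLM generates meaning-preserving rewrites of the input sentence, each rewrite is embedded training-free with a PromptEOL-style last-token prompt, and the resulting vectors are mean-pooled into one sentence embedding. The model name, both prompt templates, the generation settings, and the number of transformations are illustrative assumptions.

```python
# Minimal GenEOL-style sketch: generate meaning-preserving rewrites with an
# LLM, embed each with a PromptEOL-style last-token prompt, mean-pool.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "mistralai/Mistral-7B-v0.1"  # assumption: any decoder-only LLM
tok = AutoTokenizer.from_pretrained(MODEL_NAME)
lm = AutoModelForCausalLM.from_pretrained(
    MODEL_NAME, torch_dtype=torch.float16, device_map="auto"
)

def generate_transformations(sentence: str, n: int = 4) -> list[str]:
    """Ask the LLM for n rewrites that preserve the sentence's meaning."""
    prompt = (
        "Rewrite the sentence below in a different way without changing "
        f'its meaning.\nSentence: "{sentence}"\nRewrite:'
    )
    inputs = tok(prompt, return_tensors="pt").to(lm.device)
    outputs = lm.generate(
        **inputs, do_sample=True, top_p=0.9, max_new_tokens=64,
        num_return_sequences=n, pad_token_id=tok.eos_token_id,
    )
    new_tokens = outputs[:, inputs["input_ids"].shape[1]:]  # strip the prompt
    return [tok.decode(t, skip_special_tokens=True).strip() for t in new_tokens]

@torch.no_grad()
def embed(sentence: str) -> torch.Tensor:
    """PromptEOL-style embedding: the final-layer hidden state of the last
    token of an 'in one word:' prompt serves as the sentence vector."""
    prompt = f'This sentence : "{sentence}" means in one word:"'
    inputs = tok(prompt, return_tensors="pt").to(lm.device)
    hidden = lm(**inputs, output_hidden_states=True).hidden_states[-1]
    return hidden[0, -1]  # last-token representation

def geneol_embedding(sentence: str, n: int = 4) -> torch.Tensor:
    """Aggregate (mean) the embeddings of the original sentence and its
    generated transformations into a single sentence embedding."""
    variants = [sentence] + generate_transformations(sentence, n)
    return torch.stack([embed(v) for v in variants]).mean(dim=0)
```

Averaging over several generated views of the same sentence, rather than relying on a single prompt, is what distinguishes this scheme from prior single-prompt training-free methods.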
Anthology ID: 2025.findings-naacl.122
Volume: Findings of the Association for Computational Linguistics: NAACL 2025
Month: April
Year: 2025
Address: Albuquerque, New Mexico
Editors: Luis Chiruzzo, Alan Ritter, Lu Wang
Venue: Findings
Publisher: Association for Computational Linguistics
Pages: 2295–2308
URL: https://preview.aclanthology.org/landing_page/2025.findings-naacl.122/
Cite (ACL): Raghuveer Thirukovalluru and Bhuwan Dhingra. 2025. GenEOL: Harnessing the Generative Power of LLMs for Training-Free Sentence Embeddings. In Findings of the Association for Computational Linguistics: NAACL 2025, pages 2295–2308, Albuquerque, New Mexico. Association for Computational Linguistics.
Cite (Informal): GenEOL: Harnessing the Generative Power of LLMs for Training-Free Sentence Embeddings (Thirukovalluru & Dhingra, Findings 2025)
PDF: https://preview.aclanthology.org/landing_page/2025.findings-naacl.122.pdf