Challenging Assumptions in Learning Generic Text Style Embeddings

Phil Ostheimer, Marius Kloft, Sophie Fellenz


Abstract
Recent advances in language representation learning primarily emphasize language modeling as the route to meaningful representations, often neglecting style-specific considerations. This study addresses this gap by creating generic, sentence-level style embeddings crucial for style-centric tasks. Our approach is grounded in the premise that low-level text style changes can compose any high-level style. We hypothesize that applying this concept to representation learning enables the development of versatile text style embeddings. By fine-tuning a general-purpose text encoder using contrastive learning and a standard cross-entropy loss, we aim to capture these low-level style shifts, anticipating that they offer insights applicable to high-level text styles. The outcomes prompt us to reconsider the underlying assumptions, as the results do not consistently show that the learned style representations capture high-level text styles.
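To make the training setup described in the abstract concrete, the following is a minimal, hypothetical PyTorch sketch: it fine-tunes a pretrained encoder with a contrastive (NT-Xent-style) loss over sentence pairs that share the same low-level style edit, plus a cross-entropy loss predicting which edit was applied. The encoder choice (`bert-base-uncased`), the number and taxonomy of style edits, the pooling, and the loss formulation are all assumptions for illustration, not the authors' exact configuration.

```python
# Hypothetical sketch of the fine-tuning setup sketched in the abstract:
# a general-purpose text encoder fine-tuned with (1) a contrastive loss
# over pairs sharing the same low-level style edit and (2) a standard
# cross-entropy loss predicting which edit was applied. All choices
# below are assumptions, not the authors' exact configuration.
import torch
import torch.nn.functional as F
from transformers import AutoModel, AutoTokenizer

ENCODER = "bert-base-uncased"   # assumed; paper says "general-purpose text encoder"
NUM_STYLE_EDITS = 8             # assumed number of low-level style edit classes

tokenizer = AutoTokenizer.from_pretrained(ENCODER)
encoder = AutoModel.from_pretrained(ENCODER)
classifier = torch.nn.Linear(encoder.config.hidden_size, NUM_STYLE_EDITS)

def embed(sentences):
    """Mean-pooled sentence embeddings from the encoder."""
    batch = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")
    hidden = encoder(**batch).last_hidden_state          # (B, T, H)
    mask = batch["attention_mask"].unsqueeze(-1)         # (B, T, 1)
    return (hidden * mask).sum(1) / mask.sum(1)          # (B, H)

def nt_xent(a, b, temperature=0.05):
    """Contrastive loss: row i of `a` is positive with row i of `b`."""
    a, b = F.normalize(a, dim=-1), F.normalize(b, dim=-1)
    logits = a @ b.T / temperature                       # (B, B) similarity matrix
    targets = torch.arange(a.size(0))                    # diagonal entries are positives
    return F.cross_entropy(logits, targets)

# One toy training step on invented data; labels index the applied style edit.
anchors   = ["the results are good.", "we will arrive at 5 pm."]
positives = ["The results are good.", "We will arrive at 5 PM."]  # same edit type
edit_labels = torch.tensor([0, 0])  # e.g., 0 = capitalization edit (assumed scheme)

z_a, z_p = embed(anchors), embed(positives)
loss = nt_xent(z_a, z_p) + F.cross_entropy(classifier(z_a), edit_labels)
loss.backward()  # followed by an optimizer step in a real training loop
```

In a full run, the contrastive term pulls embeddings of same-edit pairs together while the classification head anchors the space to the low-level edit taxonomy; the paper's negative result concerns whether such a space transfers to high-level styles.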
Anthology ID:
2025.insights-1.1
Volume:
The Sixth Workshop on Insights from Negative Results in NLP
Month:
May
Year:
2025
Address:
Albuquerque, New Mexico
Editors:
Aleksandr Drozd, João Sedoc, Shabnam Tafreshi, Arjun Akula, Raphael Shu
Venues:
insights | WS
Publisher:
Association for Computational Linguistics
Pages:
1–6
URL:
https://preview.aclanthology.org/fix-sig-urls/2025.insights-1.1/
Cite (ACL):
Phil Ostheimer, Marius Kloft, and Sophie Fellenz. 2025. Challenging Assumptions in Learning Generic Text Style Embeddings. In The Sixth Workshop on Insights from Negative Results in NLP, pages 1–6, Albuquerque, New Mexico. Association for Computational Linguistics.
Cite (Informal):
Challenging Assumptions in Learning Generic Text Style Embeddings (Ostheimer et al., insights 2025)
PDF:
https://preview.aclanthology.org/fix-sig-urls/2025.insights-1.1.pdf