Disentangling Linguistic Features with Dimension-Wise Analysis of Vector Embeddings

Saniya Karwa, Navpreet Singh


Abstract
Understanding the inner workings of neural embeddings, particularly in models such as BERT, remains a challenge because of their high-dimensional and opaque nature. This paper proposes a framework for uncovering the specific dimensions of vector embeddings that encode distinct linguistic properties (LPs). We introduce the Linguistically Distinct Sentence Pairs (LDSP-10) dataset, which isolates ten key linguistic properties, such as synonymy, negation, tense, and quantity. Using this dataset, we analyze BERT embeddings with several methods, including the Wilcoxon signed-rank test, mutual information, and recursive feature elimination, to identify the most influential dimensions for each LP. We introduce a new metric, the Embedding Dimension Impact (EDI) score, which quantifies the relevance of each embedding dimension to an LP. Our findings show that certain properties, such as negation and polarity, are robustly encoded in specific dimensions, while others, like synonymy, exhibit more complex patterns. This study provides insights into the interpretability of embeddings, which can guide the development of more transparent and optimized language models, with implications for model bias mitigation and the responsible deployment of AI systems.
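
The dimension-wise tests named in the abstract can be sketched in a few lines. The fragment below is a minimal illustration, not the authors' released code: it assumes the bert-base-uncased checkpoint, [CLS]-token pooling, and a handful of invented negation pairs standing in for LDSP-10. It ranks embedding dimensions with a paired Wilcoxon signed-rank test and with mutual information; it does not reproduce the paper's EDI score.

# Minimal sketch of dimension-wise analysis for one linguistic property (negation).
# Assumptions not taken from the paper: the "bert-base-uncased" checkpoint, [CLS]
# pooling, the toy sentence pairs below, and the final ranking step. The paper's
# EDI score and the LDSP-10 data are not reproduced here.
import numpy as np
import torch
from scipy.stats import wilcoxon
from sklearn.feature_selection import mutual_info_classif
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")
model.eval()

def embed(sentences):
    """Return [CLS] embeddings, shape (n_sentences, hidden_size)."""
    with torch.no_grad():
        batch = tokenizer(sentences, padding=True, return_tensors="pt")
        hidden = model(**batch).last_hidden_state
    return hidden[:, 0, :].numpy()

# Toy sentence pairs that differ only in negation (use far more pairs in practice).
originals = ["The cake is sweet.", "She arrived on time.", "The door was open.",
             "He likes coffee.", "The results are final."]
perturbed = ["The cake is not sweet.", "She did not arrive on time.",
             "The door was not open.", "He does not like coffee.",
             "The results are not final."]

A, B = embed(originals), embed(perturbed)   # each of shape (n_pairs, 768)

# Paired Wilcoxon signed-rank test per dimension: does the dimension shift
# systematically when negation is introduced?
p_values = np.array([wilcoxon(A[:, d], B[:, d]).pvalue for d in range(A.shape[1])])

# Mutual information between each dimension and the negated/non-negated label.
X = np.vstack([A, B])
y = np.array([0] * len(A) + [1] * len(B))
mi = mutual_info_classif(X, y, random_state=0)

# Dimensions with low p-values and high MI are candidates for encoding the property.
print("Lowest-p dimensions:", np.argsort(p_values)[:10])
print("Highest-MI dimensions:", np.argsort(-mi)[:10])

Recursive feature elimination could be layered on the same X and y (for example, sklearn's RFE wrapped around a linear classifier) to cross-check the ranked dimensions; in practice the LDSP-10 pairs, rather than the toy sentences above, would supply the data.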
Anthology ID: 2025.trustnlp-main.30
Volume: Proceedings of the 5th Workshop on Trustworthy NLP (TrustNLP 2025)
Month: May
Year: 2025
Address: Albuquerque, New Mexico
Editors: Trista Cao, Anubrata Das, Tharindu Kumarage, Yixin Wan, Satyapriya Krishna, Ninareh Mehrabi, Jwala Dhamala, Anil Ramakrishna, Aram Galstyan, Anoop Kumar, Rahul Gupta, Kai-Wei Chang
Venues: TrustNLP | WS
Publisher: Association for Computational Linguistics
Pages: 461–488
URL: https://preview.aclanthology.org/fix-sig-urls/2025.trustnlp-main.30/
Cite (ACL): Saniya Karwa and Navpreet Singh. 2025. Disentangling Linguistic Features with Dimension-Wise Analysis of Vector Embeddings. In Proceedings of the 5th Workshop on Trustworthy NLP (TrustNLP 2025), pages 461–488, Albuquerque, New Mexico. Association for Computational Linguistics.
Cite (Informal): Disentangling Linguistic Features with Dimension-Wise Analysis of Vector Embeddings (Karwa & Singh, TrustNLP 2025)
PDF: https://preview.aclanthology.org/fix-sig-urls/2025.trustnlp-main.30.pdf