Jacob A. Matthews
2025
Disentangling language change: sparse autoencoders quantify the semantic evolution of indigeneity in French
Jacob A. Matthews | Laurent Dubreuil | Imane Terhmina | Yunci Sun | Matthew Wilkens | Marten van Schijndel
Proceedings of the 2025 Conference of the Nations of the Americas Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)
This study presents a novel approach to analyzing historical language change, focusing on the evolving semantics of the French term “indigène(s)” (“indigenous”) between 1825 and 1950. While existing approaches to measuring semantic change with contextual word embeddings (CWE) rely primarily on similarity measures or clustering, these methods may not be suitable for highly imbalanced datasets, and pose challenges for interpretation. For this reason, we propose an interpretable, feature-level approach to analyzing language change, which we use to trace the semantic evolution of “indigène(s)” over a 125-year period. Following recent work on sequence embeddings (O’Neill et al., 2024), we use k-sparse autoencoders (k-SAE) (Makhzani and Frey, 2013) to interpret over 210,000 CWEs generated using sentences sourced from the French National Library. We demonstrate that k-SAEs can learn interpretable features from CWEs, as well as how differences in feature activations across time periods reveal highly specific aspects of language change. In addition, we show that diachronic change in feature activation frequency reflects the evolution of French colonial legal structures during the 19th and 20th centuries.
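The feature-level analysis described above rests on a k-sparse autoencoder trained to reconstruct contextual word embeddings while keeping only the k largest hidden activations per input. The sketch below illustrates that mechanism; the embedding dimension, hidden width, value of k, and the plain MSE reconstruction objective are illustrative assumptions, not the configuration reported in the paper.

```python
# Minimal sketch of a k-sparse autoencoder (k-SAE) in the spirit of
# Makhzani and Frey (2013), applied to contextual word embeddings (CWEs).
# All dimensions and hyperparameters here are assumptions for illustration.
import torch
import torch.nn as nn


class KSparseAutoencoder(nn.Module):
    def __init__(self, d_input: int, d_hidden: int, k: int):
        super().__init__()
        self.k = k
        self.encoder = nn.Linear(d_input, d_hidden)
        self.decoder = nn.Linear(d_hidden, d_input)

    def forward(self, x: torch.Tensor) -> tuple[torch.Tensor, torch.Tensor]:
        # Encode, then keep only the k largest activations per example;
        # every other hidden unit is zeroed out (the "k-sparse" constraint).
        h = self.encoder(x)
        topk = torch.topk(h, self.k, dim=-1)
        sparse_h = torch.zeros_like(h).scatter_(-1, topk.indices, topk.values)
        return self.decoder(sparse_h), sparse_h


# Illustrative usage: reconstruct 768-dimensional CWEs with 32 active features.
model = KSparseAutoencoder(d_input=768, d_hidden=4096, k=32)
cwes = torch.randn(16, 768)                   # placeholder batch of embeddings
recon, features = model(cwes)
loss = nn.functional.mse_loss(recon, cwes)    # reconstruction objective
```

Once trained, the sparse hidden activations (`features` above) can be inspected per time period; comparing how often a given feature fires across periods is the kind of diachronic comparison the abstract describes.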
2024
Semantics or spelling? Probing contextual word embeddings with orthographic noise
Jacob A. Matthews | John R. Starr | Marten van Schijndel
Findings of the Association for Computational Linguistics: ACL 2024
Pretrained language model (PLM) hidden states are frequently employed as contextual word embeddings (CWE): high-dimensional representations that encode semantic information given linguistic context. Across many areas of computational linguistics research, similarity between CWEs is interpreted as semantic similarity. However, it remains unclear exactly what information is encoded in PLM hidden states. We investigate this practice by probing PLM representations using minimal orthographic noise. We expect that if CWEs primarily encode semantic information, a single character swap in the input word will not drastically affect the resulting representation, given sufficient linguistic context. Surprisingly, we find that CWEs generated by popular PLMs are highly sensitive to noise in input data, and that this sensitivity is related to subword tokenization: the fewer tokens used to represent a word at input, the more sensitive its corresponding CWE. This suggests that CWEs capture information unrelated to word-level meaning and can be manipulated through trivial modifications of input data. We conclude that these PLM-derived CWEs may not be reliable semantic proxies, and that caution is warranted when interpreting representational similarity.
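The probe described above amounts to perturbing a target word with a single character swap, recomputing its contextual embedding, and measuring cosine similarity against the clean embedding. Below is a minimal sketch of that procedure; the model name, the mean-pooling over subword tokens, and the hard-coded example sentence are assumptions for illustration, not the paper's exact experimental setup.

```python
# Sketch of probing a CWE with minimal orthographic noise: swap one character
# in a target word and compare clean vs. noisy embeddings by cosine similarity.
import torch
from transformers import AutoModel, AutoTokenizer

MODEL_NAME = "bert-base-uncased"  # assumed model; any encoder PLM would do
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModel.from_pretrained(MODEL_NAME)
model.eval()


def word_embedding(sentence: str, word_index: int) -> torch.Tensor:
    """Mean-pool final-layer hidden states of the subword tokens that
    belong to the word at position `word_index`."""
    enc = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**enc).last_hidden_state[0]   # (seq_len, d_model)
    ids = enc.word_ids(0)                            # subword -> word mapping
    mask = torch.tensor([i == word_index for i in ids])
    return hidden[mask].mean(dim=0)


def swap_characters(word: str, i: int) -> str:
    """Swap the characters at positions i and i+1 (a single-swap perturbation)."""
    chars = list(word)
    chars[i], chars[i + 1] = chars[i + 1], chars[i]
    return "".join(chars)


# Illustrative example: perturb "representation" in context and compare CWEs.
clean = "The model builds a representation of the sentence."
noisy = clean.replace("representation", swap_characters("representation", 3))
similarity = torch.nn.functional.cosine_similarity(
    word_embedding(clean, word_index=4),
    word_embedding(noisy, word_index=4),
    dim=0,
)
print(f"cosine similarity under one character swap: {similarity.item():.3f}")
```

Running this kind of probe across many words makes it possible to relate the similarity drop to how many subword tokens the clean word occupies, which is the tokenization effect the abstract reports.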