Daphna Keidar
2022
Slangvolution: A Causal Analysis of Semantic Change and Frequency Dynamics in Slang
Daphna Keidar | Andreas Opedal | Zhijing Jin | Mrinmaya Sachan
Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
Languages are continuously undergoing changes, and the mechanisms that underlie these changes are still a matter of debate. In this work, we approach language evolution through the lens of causality in order to model not only how various distributional factors associate with language change, but how they causally affect it. In particular, we study slang, which is an informal language that is typically restricted to a specific group or social setting. We analyze the semantic change and frequency shift of slang words and compare them to those of standard, nonslang words. With causal discovery and causal inference techniques, we measure the effect that word type (slang/nonslang) has on both semantic change and frequency shift, as well as its relationship to frequency, polysemy, and part of speech. Our analysis provides some new insights into the study of language change; e.g., we show that slang words undergo less semantic change but tend to have larger frequency shifts over time.
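The effect estimation the abstract describes can be illustrated with a minimal sketch. This is not the paper's actual pipeline; it uses synthetic data and a plain covariate-adjusted regression, assuming (hypothetically) that frequency and polysemy confound the relationship between word type and semantic change:

```python
# Illustrative sketch only (synthetic data, not the paper's method):
# estimate the effect of word type (slang=1 / nonslang=0) on semantic
# change, adjusting for frequency and polysemy as assumed confounders.
import numpy as np

rng = np.random.default_rng(0)
n = 1000
slang = rng.integers(0, 2, n)                  # treatment: word type
freq = rng.normal(0.0, 1.0, n) - 0.5 * slang   # hypothetical confounder
polysemy = rng.poisson(2, n)                   # hypothetical confounder
# Hypothetical outcome: slang has a negative effect on semantic change
sem_change = (1.0 - 0.3 * slang + 0.2 * freq + 0.05 * polysemy
              + rng.normal(0.0, 0.1, n))

# Covariate-adjusted estimate via ordinary least squares
X = np.column_stack([np.ones(n), slang, freq, polysemy])
beta, *_ = np.linalg.lstsq(X, sem_change, rcond=None)
print(f"adjusted effect of slang on semantic change: {beta[1]:.2f}")
```

With the confounders included in the regression, the coefficient on `slang` recovers the (synthetic) negative effect; a naive difference in group means would be biased by the frequency confounding.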
2021
Towards Automatic Bias Detection in Knowledge Graphs
Daphna Keidar | Mian Zhong | Ce Zhang | Yash Raj Shrestha | Bibek Paudel
Findings of the Association for Computational Linguistics: EMNLP 2021
With the recent surge in social applications relying on knowledge graphs, the need for techniques to ensure fairness in KG-based methods is becoming increasingly evident. Previous works have demonstrated that KGs are prone to various social biases and have proposed multiple methods for debiasing them. However, in such studies the focus has been on debiasing techniques, while the relations to be debiased are specified manually by the user. As manual specification is itself susceptible to human cognitive bias, there is a need for a system capable of quantifying and exposing biases that can support more informed decisions on what to debias. To address this gap in the literature, we describe a framework for identifying biases present in knowledge graph embeddings, based on numerical bias metrics. We illustrate the framework with three different bias measures on the task of profession prediction, and it can be flexibly extended to further bias definitions and applications. The relations flagged as biased can then be handed to decision makers for judgment on subsequent debiasing.
Co-authors
- Mian Zhong (1)
- Ce Zhang (1)
- Yash Raj Shrestha (1)
- Bibek Paudel (1)
- Andreas Opedal (1)