Towards Author-informed NLP: Mind the Social Bias

Inbar Pendzel, Einat Minkov


Abstract
Social text understanding is prone to failure when opinions are conveyed implicitly or sarcastically. It is therefore desirable to model users' contexts when processing the texts they author. In this work, we represent users within a social embedding space learned from the Twitter network at large scale. Similar to word embeddings that encode lexical semantics, the network embeddings encode latent dimensions of social semantics. We perform extensive experiments on author-informed stance prediction, demonstrating improved generalization through inductive social user modeling, both within and across topics. Similar results were obtained for author-informed toxicity and incivility detection. The proposed approach may pave the way to social NLP that considers user embeddings as a contextual modality. However, our investigation also reveals that user stances are correlated with the personal socio-demographic traits encoded in their embeddings. Hence, author-informed NLP approaches may inadvertently model and reinforce socio-demographic and other social biases.
Anthology ID:
2025.emnlp-main.1764
Volume:
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing
Month:
November
Year:
2025
Address:
Suzhou, China
Editors:
Christos Christodoulopoulos, Tanmoy Chakraborty, Carolyn Rose, Violet Peng
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
34813–34826
URL:
https://preview.aclanthology.org/ingest-emnlp/2025.emnlp-main.1764/
Cite (ACL):
Inbar Pendzel and Einat Minkov. 2025. Towards Author-informed NLP: Mind the Social Bias. In Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing, pages 34813–34826, Suzhou, China. Association for Computational Linguistics.
Cite (Informal):
Towards Author-informed NLP: Mind the Social Bias (Pendzel & Minkov, EMNLP 2025)
PDF:
https://preview.aclanthology.org/ingest-emnlp/2025.emnlp-main.1764.pdf
Checklist:
 2025.emnlp-main.1764.checklist.pdf