Evaluation of Word Embeddings for the Social Sciences

Ricardo Schiffers, Dagmar Kern, Daniel Hienert

Abstract
Word embeddings are an essential instrument in many NLP tasks. Most available resources are trained on general language from Web corpora or Wikipedia dumps. However, word embeddings for domain-specific language are rare, in particular for the social science domain. Therefore, in this work, we describe the creation and evaluation of word embedding models based on 37,604 open-access social science research papers. In the evaluation, we compare domain-specific and general language models for (i) language coverage, (ii) diversity, and (iii) semantic relationships. We found that the created domain-specific model, even with a relatively small vocabulary size, covers a large part of social science concepts and that its neighborhoods are diverse in comparison to more general models. Across all relation types, we also found a more extensive coverage of semantic relationships.
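(This page does not include the authors' code. As a rough, hypothetical sketch of the kind of pipeline the abstract describes, the following trains a Word2Vec model with gensim on a tokenized corpus and probes vocabulary coverage and semantic neighborhoods. The corpus contents, hyperparameters, and term list are illustrative assumptions, not the authors' setup.)

    from gensim.models import Word2Vec

    # Hypothetical corpus: one tokenized sentence per entry, e.g. extracted
    # from the full texts of open-access social science papers.
    corpus = [
        ["survey", "respondents", "answered", "the", "questionnaire"],
        ["panel", "data", "were", "collected", "in", "two", "waves"],
        # ... many more sentences in a real run
    ]

    # Train a small skip-gram model; hyperparameters are illustrative only.
    model = Word2Vec(sentences=corpus, vector_size=100, window=5,
                     min_count=1, sg=1, workers=4)

    # (i) Language coverage: fraction of domain terms in the vocabulary.
    domain_terms = ["survey", "questionnaire", "panel"]  # assumed term list
    covered = [t for t in domain_terms if t in model.wv.key_to_index]
    print(f"coverage: {len(covered)}/{len(domain_terms)}")

    # (iii) Semantic neighborhoods: nearest neighbors of a domain concept.
    print(model.wv.most_similar("survey", topn=5))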
Anthology ID:
2022.latechclfl-1.1
Volume:
Proceedings of the 6th Joint SIGHUM Workshop on Computational Linguistics for Cultural Heritage, Social Sciences, Humanities and Literature
Month:
October
Year:
2022
Address:
Gyeongju, Republic of Korea
Editors:
Stefania Degaetano-Ortlieb, Anna Kazantseva, Nils Reiter, Stan Szpakowicz
Venue:
LaTeCH-CLfL
SIG:
SIGHUM
Publisher:
International Conference on Computational Linguistics
Pages:
1–6
URL:
https://aclanthology.org/2022.latechclfl-1.1
Cite (ACL):
Ricardo Schiffers, Dagmar Kern, and Daniel Hienert. 2022. Evaluation of Word Embeddings for the Social Sciences. In Proceedings of the 6th Joint SIGHUM Workshop on Computational Linguistics for Cultural Heritage, Social Sciences, Humanities and Literature, pages 1–6, Gyeongju, Republic of Korea. International Conference on Computational Linguistics.
Cite (Informal):
Evaluation of Word Embeddings for the Social Sciences (Schiffers et al., LaTeCH-CLfL 2022)
PDF:
https://preview.aclanthology.org/nschneid-patch-2/2022.latechclfl-1.1.pdf