Evaluating Word Embeddings in Extremely Under-Resourced Languages: A Case Study in Bribri

Rolando Coto-Solano


Abstract
Word embeddings are critical for numerous NLP tasks, but their evaluation in actual under-resourced settings needs further examination. This paper presents a case study in Bribri, a Chibchan language from Costa Rica. Four experiments were adapted from English: word similarities, WordSim353 correlations, odd-one-out tasks, and analogies. Here we discuss their adaptation to an under-resourced Indigenous language and use them to measure semantic and morphological learning. We trained 96 word2vec models with different hyperparameter combinations. The best models for this under-resourced scenario were Skip-grams with an intermediate size (100 dimensions) and large window sizes (10). These had an average correlation of r=0.28 with WordSim353, 76% accuracy on the semantic odd-one-out task, and 70% accuracy on the structural/morphological odd-one-out task. The performance was lower for the analogies: The best models could find the appropriate semantic target amongst the first 25 results approximately 60% of the time, but could only find the morphological/structural target 11% of the time. Future research needs to further explore the patterns of morphological/structural learning, to examine the behavior of deep learning embeddings, and to establish a human baseline. This project seeks to improve Bribri NLP and ultimately help in its maintenance and revitalization.
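As a rough illustration of the setup described in the abstract, the sketch below trains a Skip-gram word2vec model with the best-performing hyperparameters reported (100 dimensions, window size 10) and runs the two query types used for evaluation: odd-one-out and analogy-style retrieval among the top 25 neighbours. This is not the paper's pipeline: the gensim library, the corpus file name, and the placeholder tokens are all assumptions for illustration, not actual Bribri data.

```python
# Minimal sketch, assuming gensim >= 4.0 and a hypothetical whitespace-tokenized
# corpus file with one sentence per line. Tokens below are placeholders, not
# real Bribri vocabulary.
from gensim.models import Word2Vec

# Load the (hypothetical) corpus as a list of tokenized sentences.
with open("bribri_corpus.txt", encoding="utf-8") as f:
    sentences = [line.split() for line in f if line.strip()]

# Skip-gram (sg=1), 100-dimensional vectors, context window of 10,
# matching the best hyperparameter combination reported in the abstract.
model = Word2Vec(
    sentences,
    sg=1,             # Skip-gram rather than CBOW
    vector_size=100,  # intermediate embedding size
    window=10,        # large window size
    min_count=1,      # keep rare words; the corpus is small
    epochs=20,
    seed=1,
)

# Odd-one-out: gensim returns the word least similar to the rest of the set.
odd = model.wv.doesnt_match(["word_a", "word_b", "word_c", "word_d"])

# Analogy-style query: a is to b as c is to ?, checking whether the expected
# target appears among the first 25 results, as in the paper's evaluation.
candidates = model.wv.most_similar(
    positive=["word_b", "word_c"], negative=["word_a"], topn=25
)
hit = any(word == "expected_target" for word, _score in candidates)
print(odd, hit)
```

A WordSim353-style correlation could be computed the same way by scoring translated word pairs with `model.wv.similarity` and correlating the scores against the human ratings.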
Anthology ID: 2022.coling-1.393
Volume: Proceedings of the 29th International Conference on Computational Linguistics
Month: October
Year: 2022
Address: Gyeongju, Republic of Korea
Venue: COLING
Publisher: International Committee on Computational Linguistics
Pages: 4455–4467
URL: https://aclanthology.org/2022.coling-1.393
Cite (ACL): Rolando Coto-Solano. 2022. Evaluating Word Embeddings in Extremely Under-Resourced Languages: A Case Study in Bribri. In Proceedings of the 29th International Conference on Computational Linguistics, pages 4455–4467, Gyeongju, Republic of Korea. International Committee on Computational Linguistics.
Cite (Informal): Evaluating Word Embeddings in Extremely Under-Resourced Languages: A Case Study in Bribri (Coto-Solano, COLING 2022)
PDF: https://preview.aclanthology.org/auto-file-uploads/2022.coling-1.393.pdf