Sello Ralethe


2025

Cross-Lingual Knowledge Projection and Knowledge Enhancement for Zero-Shot Question Answering in Low-Resource Languages
Sello Ralethe | Jan Buys
Proceedings of the 31st International Conference on Computational Linguistics

Knowledge bases (KBs) in low-resource languages (LRLs) are often incomplete, posing a challenge for developing effective question answering systems over KBs in those languages. At the same time, the training corpora available for LRL language models are limited, restricting the ability to do zero-shot question answering using multilingual language models. To address these issues, we propose a two-fold approach. First, we introduce LeNS-Align, a novel cross-lingual mapping technique which improves the quality of word alignments extracted from parallel English-LRL text by combining lexical alignment, named entity recognition, and semantic alignment. LeNS-Align is applied to perform cross-lingual projection of KB triples. Second, we leverage the projected KBs to enhance the question answering capabilities of multilingual language models by augmenting them with Graph Neural Networks that embed the projected knowledge. We apply our approach to map triples from two existing English KBs, ConceptNet and DBpedia, to create comprehensive LRL knowledge bases for four low-resource South African languages. Evaluation on three translated test sets shows that our approach improves zero-shot question answering accuracy by up to 17% compared to baselines without KB access. These results highlight how our approach contributes to bridging the knowledge gap for low-resource languages by expanding knowledge coverage and question answering capabilities.

Cross-Lingual Knowledge Augmentation for Mitigating Generic Overgeneralization in Multilingual Language Models
Sello Ralethe | Jan Buys
Proceedings of the 5th Workshop on Multilingual Representation Learning (MRL 2025)

Generic statements like “birds fly” or “lions have manes” express generalizations about kinds that allow exceptions, yet language models tend to overgeneralize them to universal claims. While previous work showed that the ASCENT KB could reduce this effect in English by 30-40%, the effectiveness of broader knowledge sources and the cross-lingual nature of this phenomenon remain unexplored. We investigate generic overgeneralization across English and four South African languages (isiZulu, isiXhosa, Sepedi, SeSotho), comparing the impact of ConceptNet and DBpedia against the previously used ASCENT KB. Our experiments show that ConceptNet reduces overgeneralization by 45-52% for minority characteristic generics, while DBpedia achieves 48-58% for majority characteristics, with combined knowledge bases reaching a 67% reduction. These improvements are consistent across all languages, though Nguni languages show higher baseline overgeneralization than Sotho-Tswana languages, suggesting that morphological features may influence this semantic bias. Our findings demonstrate that commonsense and encyclopedic knowledge provide complementary benefits for multilingual semantic understanding, offering insights for developing NLP systems that capture nuanced semantics in low-resource languages.

2022

Generic Overgeneralization in Pre-trained Language Models
Sello Ralethe | Jan Buys
Proceedings of the 29th International Conference on Computational Linguistics

Generic statements such as “ducks lay eggs” make claims about kinds, e.g., ducks as a category. The generic overgeneralization effect refers to the inclination to accept false universal generalizations such as “all ducks lay eggs” or “all lions have manes” as true. In this paper, we investigate the generic overgeneralization effect in pre-trained language models experimentally. We show that pre-trained language models suffer from overgeneralization and tend to treat quantified generic statements such as “all ducks lay eggs” as if they were true generics. Furthermore, we demonstrate how knowledge embedding methods can lessen this effect by injecting factual knowledge about kinds into pre-trained language models. To this end, we source factual knowledge about two types of generics, minority characteristic generics and majority characteristic generics, and inject this knowledge using a knowledge embedding model. Our results show that knowledge injection reduces, but does not eliminate, generic overgeneralization, and that majority characteristic generics of kinds are more susceptible to overgeneralization bias.

2020

Adaptation of Deep Bidirectional Transformers for Afrikaans Language
Sello Ralethe
Proceedings of the Twelfth Language Resources and Evaluation Conference

The recent success of pretrained language models in Natural Language Processing has sparked interest in training such models for languages other than English. Currently, training of these models can be either monolingual or multilingual. Multilingual models are trained on the concatenated data of multiple languages. We introduce AfriBERT, a language model for the Afrikaans language based on Bidirectional Encoder Representations from Transformers (BERT). We compare the performance of AfriBERT against multilingual BERT on multiple downstream tasks, namely part-of-speech tagging, named-entity recognition, and dependency parsing. Our results show that AfriBERT improves the current state of the art in most of the tasks we considered, and that transfer learning from a multilingual to a monolingual model can yield significant performance improvements on downstream tasks. We release the pretrained model for AfriBERT.