Anita Gelboim


2025

TafBERTa: Learning Grammatical Rules from Small-Scale Language Acquisition Data in Hebrew
Anita Gelboim | Elior Sulem
Proceedings of the First BabyLM Workshop

We present TafBERTa, a compact RoBERTa-based language model tailored to Hebrew child-directed speech (CDS). This work builds on the BabyBERTa framework to address data scarcity and morphological complexity in Hebrew. Focusing on determiner-noun grammatical agreement phenomena, we show that TafBERTa achieves performance competitive with large-scale Hebrew language models while requiring significantly less data and fewer computational resources. As part of this work, we also introduce HTBerman, a new corpus of Hebrew CDS aligned with morphological metadata, and HeCLiMP, a new minimal-pairs benchmark for grammatical evaluation in Hebrew. Our results demonstrate the effectiveness of TafBERTa in grammaticality judgments and its potential for efficient NLP in low-resource settings.
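Since HeCLiMP follows the minimal-pairs paradigm, the evaluation protocol it implies can be illustrated with a short sketch: a masked language model scores both members of each pair, and the pair counts as correct when the grammatical sentence receives the higher score. The sketch below uses pseudo-log-likelihood scoring with HuggingFace `transformers`; the checkpoint name and the example sentence pair are placeholders for illustration only, not the released TafBERTa model or actual HeCLiMP items.

```python
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

# Placeholder checkpoint; the actual TafBERTa weights are not assumed here.
MODEL_NAME = "roberta-base"

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForMaskedLM.from_pretrained(MODEL_NAME)
model.eval()

def pseudo_log_likelihood(sentence: str) -> float:
    """Sum of log-probabilities of each token when it is masked in turn,
    the standard PLL score used for minimal-pair evaluation of masked LMs."""
    ids = tokenizer(sentence, return_tensors="pt")["input_ids"][0]
    total = 0.0
    for i in range(1, len(ids) - 1):  # skip BOS/EOS special tokens
        masked = ids.clone()
        masked[i] = tokenizer.mask_token_id
        with torch.no_grad():
            logits = model(masked.unsqueeze(0)).logits[0, i]
        log_probs = torch.log_softmax(logits, dim=-1)
        total += log_probs[ids[i]].item()
    return total

# Hypothetical determiner-noun agreement pair (not an actual HeCLiMP item):
# a masculine noun with a matching vs. mismatching demonstrative.
grammatical = "הילד הזה"     # "this boy" (masc. demonstrative agrees)
ungrammatical = "הילד הזאת"  # *"this(fem.) boy" (agreement violation)

correct = pseudo_log_likelihood(grammatical) > pseudo_log_likelihood(ungrammatical)
print("pair scored correctly:", correct)
```

Accuracy over a benchmark of such pairs is then simply the fraction scored correctly, which is how minimal-pair suites in the BLiMP tradition are typically reported.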