HalleluBERT: Let Every Token That Has Meaning Bear Its Weight

Raphael Scheible-Schmitt


Abstract
Transformer-based models have advanced NLP, yet Hebrew still lacks a RoBERTa encoder trained at scale and released in both base and large variants. We present HalleluBERT, a RoBERTa-based encoder family trained from scratch on 49.1 GB of deduplicated Hebrew web text and Wikipedia, using a Hebrew-specific byte-level BPE vocabulary. On native Hebrew benchmarks for named entity recognition (BMC, NEMO) and sentiment classification (SMCD), HalleluBERT outperforms both monolingual and multilingual baselines and achieves the highest unweighted mean score across the three benchmarks. We release the model weights and tokenizer under the MIT license to support reproducible Hebrew NLP research.
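
Since the weights and tokenizer are released, the encoder should load like any Hugging Face checkpoint. The following is a minimal sketch of masked-token prediction; the hub identifier "hallelubert-base" is an assumption for illustration, not the name given in the paper.

# Minimal sketch: loading HalleluBERT for masked-token prediction.
# The model identifier below is hypothetical; substitute the released name.
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

model_name = "hallelubert-base"  # hypothetical hub identifier
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForMaskedLM.from_pretrained(model_name)

# Hebrew sentence with a masked slot: "Jerusalem is the <mask> of Israel".
sentence = f"ירושלים היא {tokenizer.mask_token} של ישראל"
inputs = tokenizer(sentence, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Decode the top-5 candidates for the masked position.
mask_pos = (inputs.input_ids == tokenizer.mask_token_id).nonzero(as_tuple=True)[1]
top5 = logits[0, mask_pos].topk(5, dim=-1).indices[0]
print([tokenizer.decode([i]) for i in top5.tolist()])

Decoding each candidate id (rather than printing raw tokens) is deliberate here: with a byte-level BPE vocabulary, raw token strings are byte-encoded and hard to read.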
Anthology ID:
2026.lrec-main.236
Volume:
Proceedings of the Fifteenth Language Resources and Evaluation Conference
Month:
May
Year:
2026
Address:
Palma de Mallorca, Spain
Editors:
Stelios Piperidis, Núria Bel, Henk van den Heuvel, Nancy Ide, Simon Krek, Antonio Toral
Venue:
LREC
Publisher:
ELRA Language Resources Association
Pages:
3022–3030
URL:
https://preview.aclanthology.org/ingest-lrec/2026.lrec-main.236/
Cite (ACL):
Raphael Scheible-Schmitt. 2026. HalleluBERT: Let Every Token That Has Meaning Bear Its Weight. In Proceedings of the Fifteenth Language Resources and Evaluation Conference, pages 3022–3030, Palma de Mallorca, Spain. ELRA Language Resources Association.
Cite (Informal):
HalleluBERT: Let Every Token That Has Meaning Bear Its Weight (Scheible-Schmitt, LREC 2026)
PDF:
https://preview.aclanthology.org/ingest-lrec/2026.lrec-main.236.pdf