FaBERT: Pre-training BERT on Persian Blogs

Mostafa Masumi, Seyed Soroush Majd, Mehrnoush Shamsfard, Hamid Beigy


Abstract
We introduce FaBERT, a Persian BERT-base model pre-trained on the HmBlogs corpus, which encompasses both informal and formal Persian texts. FaBERT is designed to excel in traditional Natural Language Understanding (NLU) tasks, addressing the intricacies of the diverse sentence structures and linguistic styles prevalent in the Persian language. In our comprehensive evaluation of FaBERT on 12 datasets covering various downstream tasks, including Sentiment Analysis (SA), Named Entity Recognition (NER), Natural Language Inference (NLI), Question Answering (QA), and Question Paraphrasing (QP), it consistently demonstrated improved performance, all achieved within a compact model size. These findings highlight the importance of utilizing diverse corpora, such as HmBlogs, to enhance the performance of language models like BERT in Persian Natural Language Processing (NLP) applications.
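As a rough illustration of how a BERT-style checkpoint like FaBERT is typically used downstream, the sketch below loads the model with the Hugging Face transformers library and extracts contextual embeddings for a Persian sentence. The identifier "sbunlp/fabert" is an assumption; the page does not state where the released checkpoint is hosted.

from transformers import AutoTokenizer, AutoModel

# Assumed Hugging Face model identifier; not stated on this page.
model_name = "sbunlp/fabert"

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name)

# Encode an informal Persian sentence (the kind found in blog text)
# and obtain contextual token embeddings from the encoder.
inputs = tokenizer("این کتاب خیلی خوب بود", return_tensors="pt")
outputs = model(**inputs)
print(outputs.last_hidden_state.shape)  # (batch_size, sequence_length, hidden_size)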
Anthology ID: 2025.wnut-1.10
Volume: Proceedings of the Tenth Workshop on Noisy and User-generated Text
Month: May
Year: 2025
Address: Albuquerque, New Mexico, USA
Editors: JinYeong Bak, Rob van der Goot, Hyeju Jang, Weerayut Buaphet, Alan Ramponi, Wei Xu, Alan Ritter
Venues: WNUT | WS
Publisher: Association for Computational Linguistics
Pages: 85–96
URL: https://preview.aclanthology.org/corrections-2025-06/2025.wnut-1.10/
DOI: 10.18653/v1/2025.wnut-1.10
Cite (ACL):
Mostafa Masumi, Seyed Soroush Majd, Mehrnoush Shamsfard, and Hamid Beigy. 2025. FaBERT: Pre-training BERT on Persian Blogs. In Proceedings of the Tenth Workshop on Noisy and User-generated Text, pages 85–96, Albuquerque, New Mexico, USA. Association for Computational Linguistics.
Cite (Informal):
FaBERT: Pre-training BERT on Persian Blogs (Masumi et al., WNUT 2025)
PDF: https://preview.aclanthology.org/corrections-2025-06/2025.wnut-1.10.pdf