AraELECTRA: Pre-Training Text Discriminators for Arabic Language Understanding

Wissam Antoun, Fady Baly, Hazem Hajj


Abstract
Advances in English language representation enabled a more sample-efficient pre-training task by Efficiently Learning an Encoder that Classifies Token Replacements Accurately (ELECTRA), which, instead of training a model to recover masked tokens, trains a discriminator model to distinguish true input tokens from corrupted tokens replaced by a generator network. In contrast, current Arabic language representation approaches rely only on pretraining via masked language modeling. In this paper, we develop an Arabic language representation model, which we name AraELECTRA. Our model is pretrained using the replaced token detection objective on large Arabic text corpora. We evaluate our model on multiple Arabic NLP tasks, including reading comprehension, sentiment analysis, and named-entity recognition, and we show that AraELECTRA outperforms current state-of-the-art Arabic language representation models, given the same pretraining data and an even smaller model size.
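For the discriminator, the replaced-token-detection objective described in the abstract reduces to a per-token binary classification: predict, for every position, whether the token is the original one or a generator substitution. The sketch below is a minimal PyTorch illustration of that loss under this reading, not the authors' implementation; the function name rtd_discriminator_loss and its arguments are hypothetical.

import torch
import torch.nn.functional as F

def rtd_discriminator_loss(disc_logits, original_ids, corrupted_ids):
    """Replaced-token-detection loss sketch: the discriminator scores every
    position of the corrupted sequence, and the target is 1 where the
    generator substituted a different token, 0 where the input is original."""
    labels = (corrupted_ids != original_ids).float()   # shape: (batch, seq_len)
    return F.binary_cross_entropy_with_logits(disc_logits, labels)

# Toy usage: two token sequences differing at positions 1 and 3.
original = torch.tensor([[5, 8, 9, 3]])
corrupted = torch.tensor([[5, 7, 9, 4]])
logits = torch.randn(1, 4)                              # hypothetical discriminator scores
print(rtd_discriminator_loss(logits, original, corrupted))

Because every input position contributes to this loss (rather than only the ~15% of masked positions in masked language modeling), the objective is more sample-efficient, which is the motivation the abstract cites.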
Anthology ID:
2021.wanlp-1.20
Volume:
Proceedings of the Sixth Arabic Natural Language Processing Workshop
Month:
April
Year:
2021
Address:
Kyiv, Ukraine (Virtual)
Venue:
WANLP
Publisher:
Association for Computational Linguistics
Pages:
191–195
URL:
https://aclanthology.org/2021.wanlp-1.20
Cite (ACL):
Wissam Antoun, Fady Baly, and Hazem Hajj. 2021. AraELECTRA: Pre-Training Text Discriminators for Arabic Language Understanding. In Proceedings of the Sixth Arabic Natural Language Processing Workshop, pages 191–195, Kyiv, Ukraine (Virtual). Association for Computational Linguistics.
Cite (Informal):
AraELECTRA: Pre-Training Text Discriminators for Arabic Language Understanding (Antoun et al., WANLP 2021)
PDF:
https://preview.aclanthology.org/ingestion-script-update/2021.wanlp-1.20.pdf
Code
 aub-mind/araBERT
Data
ARCD, ArSentD-LEV, SQuAD, TyDi QA