Abstract
Pre-training Transformer-based models such as BERT and ELECTRA on a collection of Arabic corpora, as demonstrated by AraBERT and AraELECTRA, has shown impressive results on downstream tasks. However, pre-training Transformer-based language models is computationally expensive, especially for large-scale models. Recently, Funnel Transformer has addressed the sequential redundancy inside the Transformer architecture by compressing the sequence of hidden states, leading to a significant reduction in pre-training cost. This paper empirically studies the performance and efficiency of building an Arabic language model with Funnel Transformer and the ELECTRA objective. We find that our model achieves state-of-the-art results on several Arabic downstream tasks despite using fewer computational resources than other BERT-based models.
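To make the compression step concrete, the following is a minimal, simplified PyTorch sketch of the Funnel-style idea of pooling the hidden-state sequence between encoder blocks. The tensor sizes, the plain average pooling, and the reuse of a single standard encoder layer are illustrative assumptions, not the paper's actual architecture (which, among other details, pools only the queries inside a block and trains with the ELECTRA objective).

```python
# Illustrative sketch only (not the paper's implementation): pool the hidden-state
# sequence between Transformer blocks so later self-attention layers run over
# fewer positions, which is the source of the pre-training cost reduction.
import torch
import torch.nn as nn

hidden = torch.randn(8, 512, 768)  # (batch, seq_len, d_model)

pool = nn.AvgPool1d(kernel_size=2, stride=2)
block = nn.TransformerEncoderLayer(d_model=768, nhead=12, batch_first=True)

# Block 1 attends over the full sequence of 512 positions.
hidden = block(hidden)

# Compress the sequence length 512 -> 256 before the next block.
# AvgPool1d expects (batch, channels, length), hence the transposes.
hidden = pool(hidden.transpose(1, 2)).transpose(1, 2)

# Block 2 now attends over only 256 positions.
hidden = block(hidden)
print(hidden.shape)  # torch.Size([8, 256, 768])
```

Because self-attention cost grows quadratically with sequence length, halving the pooled length in later blocks substantially reduces compute, while a decoder can re-expand the sequence when per-token outputs are needed.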
- Anthology ID: 2021.findings-emnlp.108
- Volume: Findings of the Association for Computational Linguistics: EMNLP 2021
- Month: November
- Year: 2021
- Address: Punta Cana, Dominican Republic
- Venue: Findings
- SIG: SIGDAT
- Publisher: Association for Computational Linguistics
- Pages: 1255–1261
- URL: https://aclanthology.org/2021.findings-emnlp.108
- DOI: 10.18653/v1/2021.findings-emnlp.108
- Cite (ACL): Sultan Alrowili and Vijay Shanker. 2021. ArabicTransformer: Efficient Large Arabic Language Model with Funnel Transformer and ELECTRA Objective. In Findings of the Association for Computational Linguistics: EMNLP 2021, pages 1255–1261, Punta Cana, Dominican Republic. Association for Computational Linguistics.
- Cite (Informal): ArabicTransformer: Efficient Large Arabic Language Model with Funnel Transformer and ELECTRA Objective (Alrowili & Shanker, Findings 2021)
- PDF: https://preview.aclanthology.org/auto-file-uploads/2021.findings-emnlp.108.pdf
- Code: salrowili/arabictransformer
- Data: ARCD, SQuAD, TyDi QA