Strategies for Arabic Readability Modeling

Juan Liberato, Bashar Alhafni, Muhamed Khalil, Nizar Habash
Abstract
Automatic readability assessment is relevant to building NLP applications for education, content analysis, and accessibility. However, Arabic readability assessment is a challenging task due to Arabic’s morphological richness and limited readability resources. In this paper, we present a set of experimental results on Arabic readability assessment using a diverse range of approaches, from rule-based methods to Arabic pretrained language models. We report our results on a newly created corpus at different textual granularity levels (words and sentence fragments). Our results show that combining different techniques yields the best results, achieving an overall macro F1 score of 86.7 at the word level and 87.9 at the fragment level on a blind test set. We make our code, data, and pretrained models publicly available.
Anthology ID:
2024.arabicnlp-1.5
Volume:
Proceedings of The Second Arabic Natural Language Processing Conference
Month:
August
Year:
2024
Address:
Bangkok, Thailand
Editors:
Nizar Habash, Houda Bouamor, Ramy Eskander, Nadi Tomeh, Ibrahim Abu Farha, Ahmed Abdelali, Samia Touileb, Injy Hamed, Yaser Onaizan, Bashar Alhafni, Wissam Antoun, Salam Khalifa, Hatem Haddad, Imed Zitouni, Badr AlKhamissi, Rawan Almatham, Khalil Mrini
Venues:
ArabicNLP | WS
SIG:
SIGARAB
Publisher:
Association for Computational Linguistics
Pages:
55–66
URL:
https://aclanthology.org/2024.arabicnlp-1.5
DOI:
10.18653/v1/2024.arabicnlp-1.5
Cite (ACL):
Juan Liberato, Bashar Alhafni, Muhamed Khalil, and Nizar Habash. 2024. Strategies for Arabic Readability Modeling. In Proceedings of The Second Arabic Natural Language Processing Conference, pages 55–66, Bangkok, Thailand. Association for Computational Linguistics.
Cite (Informal):
Strategies for Arabic Readability Modeling (Liberato et al., ArabicNLP-WS 2024)
PDF:
https://preview.aclanthology.org/autopr/2024.arabicnlp-1.5.pdf