Zineddine Kahhoul
2024
Functional Text Dimensions for Arabic Text Classification
Zeyd Ferhat | Abir Betka | Riyadh Barka | Zineddine Kahhoul | Selma Boutiba | Mohamed Tiar | Habiba Dahmani | Ahmed Abdelali
Proceedings of The Second Arabic Natural Language Processing Conference
Text classification is of paramount importance in a wide range of applications, including information retrieval, information extraction, and sentiment analysis. The challenge of classifying and labelling text genres, especially in web-based corpora, has received considerable attention. The frequent absence of unambiguous genre information complicates the identification of text types. To address these issues, the Functional Text Dimensions (FTD) method has been introduced to provide a universal set of categories for text classification. This study presents the Arabic Functional Text Dimensions Corpus (AFTD Corpus), a carefully curated collection of documents for evaluating text classification in Arabic. The AFTD Corpus, which we are making available to the community, consists of 3400 documents spanning 17 class categories. Through a comprehensive evaluation using traditional machine learning and neural models, we assess the effectiveness of the FTD approach in the Arabic context. CAMeLBERT, a state-of-the-art model, achieved an impressive F1 score of 0.81 on our corpus. This research highlights the potential of the FTD method for improving text classification, especially for Arabic content, and underlines the importance of robust classification models in web applications.
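The abstract does not spell out the training setup, but a 17-way document classifier over the AFTD Corpus with CAMeLBERT can be sketched as below. This is a minimal, hedged example assuming the Hugging Face `transformers`/`datasets` stack, the public `CAMeL-Lab/bert-base-arabic-camelbert-mix` checkpoint, and hypothetical CSV files (`aftd_train.csv`, `aftd_test.csv`) with `text` and `label` columns; the paper's actual hyperparameters and data format may differ.

```python
# Sketch: fine-tuning a CAMeLBERT checkpoint for 17-way FTD classification.
# Checkpoint choice, file names, and hyperparameters are illustrative assumptions.
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)
from datasets import load_dataset

MODEL = "CAMeL-Lab/bert-base-arabic-camelbert-mix"  # one public CAMeLBERT variant

tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForSequenceClassification.from_pretrained(MODEL, num_labels=17)

# Assumed CSV layout: one document per row, columns "text" and "label" (0..16).
data = load_dataset("csv", data_files={"train": "aftd_train.csv",
                                       "test": "aftd_test.csv"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

data = data.map(tokenize, batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="aftd-camelbert",
                           num_train_epochs=3,
                           per_device_train_batch_size=16),
    train_dataset=data["train"],
    eval_dataset=data["test"],
    tokenizer=tokenizer,  # enables dynamic padding via the default collator
)
trainer.train()
```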
2023
On Enhancing Fine-Tuning for Pre-trained Language Models
Abir Betka | Zeyd Ferhat | Riyadh Barka | Selma Boutiba | Zineddine Kahhoul | Tiar Lakhdar | Ahmed Abdelali | Habiba Dahmani
Proceedings of ArabicNLP 2023
The remarkable capabilities of Natural Language Models to grasp language subtleties have paved the way for their widespread adoption in diverse fields. However, adapting them for specific tasks requires the time-consuming process of fine-tuning, which consumes significant computational power and energy. Therefore, optimizing the fine-tuning time is advantageous. In this study, we propose an alternative approach that limits parameter manipulation to select layers. Our exploration led to identifying layers that offer the best trade-off between time optimization and performance preservation. We further validated this approach on multiple downstream tasks, and the results demonstrated its potential to reduce fine-tuning time by up to 50% while maintaining performance within a negligible deviation of less than 5%. This research showcases a promising technique for significantly improving fine-tuning efficiency without compromising task- or domain-specific learning capabilities.
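The core idea of restricting updates to a subset of layers can be illustrated with a short sketch. The abstract does not say which layers are selected, so the choice below (last two encoder layers plus the classification head) and the `bert-base-multilingual-cased` checkpoint are placeholder assumptions, not the paper's configuration.

```python
# Sketch: selective-layer fine-tuning by freezing all parameters except a
# chosen subset. The specific layers chosen here are illustrative only.
from transformers import AutoModelForSequenceClassification

model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-multilingual-cased", num_labels=2)  # placeholder checkpoint/task

# Hypothetical selection: last two encoder layers and the classifier head.
TRAINABLE = ("encoder.layer.10.", "encoder.layer.11.", "classifier.")

for name, param in model.named_parameters():
    param.requires_grad = any(tag in name for tag in TRAINABLE)

# Frozen parameters receive no gradients, and backpropagation stops at the
# earliest unfrozen layer, which is where the fine-tuning time savings come from.
trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
total = sum(p.numel() for p in model.parameters())
print(f"training {trainable / total:.1%} of {total:,} parameters")
```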