ArabGlossBERT: Fine-Tuning BERT on Context-Gloss Pairs for WSD

Moustafa Al-Hajj, Mustafa Jarrar


Abstract
Using pre-trained transformer models such as BERT has proven effective in many NLP tasks. This paper presents our work on fine-tuning BERT models for Arabic Word Sense Disambiguation (WSD). We treated WSD as a sentence-pair binary classification task. First, we constructed a dataset of labeled Arabic context-gloss pairs (~167k pairs) extracted from the Arabic Ontology and the large lexicographic database available at Birzeit University. Each pair was labeled as True or False, and the target word in each context was identified and annotated. Second, we used this dataset to fine-tune three pre-trained Arabic BERT models. Third, we experimented with different supervision signals to emphasize the target word in context. Our experiments achieved promising results (84% accuracy) despite the large set of senses used in the experiments.
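As a rough illustration of the sentence-pair setup the abstract describes, the sketch below encodes one context-gloss pair with a BERT sequence-classification head. This is not the authors' released code: the model name, the example pair, and the quotation-mark target marker are placeholders standing in for the three Arabic BERT models and the supervision signals compared in the paper.

```python
# Minimal sketch of context-gloss pair classification for WSD.
# Assumptions: model checkpoint, example sentences, and the use of
# quotation marks to emphasize the target word are all illustrative.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

MODEL_NAME = "aubmindlab/bert-base-arabertv02"  # placeholder Arabic BERT

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
# Binary head: True (the gloss matches the target word's sense) vs. False.
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME, num_labels=2)

# One context-gloss pair; the target word is surrounded by markers,
# one of several signals for emphasizing the target word in context.
context = 'ذهب الولد إلى " عين " الماء'   # "the boy went to the water spring"
gloss = "ينبوع يخرج منه الماء من الأرض"   # candidate sense definition

# Encode as a sentence pair: [CLS] context [SEP] gloss [SEP]
inputs = tokenizer(context, gloss, return_tensors="pt", truncation=True)

with torch.no_grad():
    logits = model(**inputs).logits

# During fine-tuning, cross-entropy over these two logits is trained
# against the True/False pair label; at inference time, the gloss with
# the highest True probability gives the predicted sense.
print(logits.softmax(dim=-1))
```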
Anthology ID:
2021.ranlp-1.5
Volume:
Proceedings of the International Conference on Recent Advances in Natural Language Processing (RANLP 2021)
Month:
September
Year:
2021
Address:
Held Online
Editors:
Ruslan Mitkov, Galia Angelova
Venue:
RANLP
Publisher:
INCOMA Ltd.
Pages:
35–43
URL:
https://aclanthology.org/2021.ranlp-1.5
Cite (ACL):
Moustafa Al-Hajj and Mustafa Jarrar. 2021. ArabGlossBERT: Fine-Tuning BERT on Context-Gloss Pairs for WSD. In Proceedings of the International Conference on Recent Advances in Natural Language Processing (RANLP 2021), pages 35–43, Held Online. INCOMA Ltd.
Cite (Informal):
ArabGlossBERT: Fine-Tuning BERT on Context-Gloss Pairs for WSD (Al-Hajj & Jarrar, RANLP 2021)
PDF:
https://preview.aclanthology.org/nschneid-patch-1/2021.ranlp-1.5.pdf