Legend at ArAIEval Shared Task: Persuasion Technique Detection using a Language-Agnostic Text Representation Model
Olumide Ojo, Olaronke Adebanji, Hiram Calvo, Damian Dieke, Olumuyiwa Ojo, Seye Akinsanya, Tolulope Abiola, Anna Feldman
Abstract
In this paper, we share our best-performing submission to the Arabic AI Tasks Evaluation Challenge (ArAIEval) at ArabicNLP 2023. Our focus was on Task 1, which involves identifying persuasion techniques in excerpts from tweets and news articles. Persuasion techniques in the Arabic texts were detected using a training loop with XLM-RoBERTa, a language-agnostic text representation model. This approach proved effective, leveraging fine-tuning of a multilingual language model. On the test set, we achieved a micro F1 score of 0.64 for subtask A of the competition.
- Anthology ID: 2023.arabicnlp-1.61
- Volume: Proceedings of ArabicNLP 2023
- Month: December
- Year: 2023
- Address: Singapore (Hybrid)
- Editors: Hassan Sawaf, Samhaa El-Beltagy, Wajdi Zaghouani, Walid Magdy, Ahmed Abdelali, Nadi Tomeh, Ibrahim Abu Farha, Nizar Habash, Salam Khalifa, Amr Keleg, Hatem Haddad, Imed Zitouni, Khalil Mrini, Rawan Almatham
- Venues: ArabicNLP | WS
- Publisher: Association for Computational Linguistics
- Pages: 594–599
- URL: https://aclanthology.org/2023.arabicnlp-1.61
- DOI: 10.18653/v1/2023.arabicnlp-1.61
- Cite (ACL): Olumide Ojo, Olaronke Adebanji, Hiram Calvo, Damian Dieke, Olumuyiwa Ojo, Seye Akinsanya, Tolulope Abiola, and Anna Feldman. 2023. Legend at ArAIEval Shared Task: Persuasion Technique Detection using a Language-Agnostic Text Representation Model. In Proceedings of ArabicNLP 2023, pages 594–599, Singapore (Hybrid). Association for Computational Linguistics.
- Cite (Informal): Legend at ArAIEval Shared Task: Persuasion Technique Detection using a Language-Agnostic Text Representation Model (Ojo et al., ArabicNLP-WS 2023)
- PDF: https://preview.aclanthology.org/proper-vol2-ingestion/2023.arabicnlp-1.61.pdf