Patrick Brandt
2023
ConfliBERT-Arabic: A Pre-trained Arabic Language Model for Politics, Conflicts and Violence
Sultan Alsarra | Luay Abdeljaber | Wooseong Yang | Niamat Zawad | Latifur Khan | Patrick Brandt | Javier Osorio | Vito D’Orazio
Proceedings of the 14th International Conference on Recent Advances in Natural Language Processing
This study investigates the use of Natural Language Processing (NLP) methods to analyze politics, conflicts and violence in the Middle East using domain-specific pre-trained language models. We introduce ConfliBERT-Arabic, a pre-trained language model for Arabic text that can efficiently analyze political, conflict, and violence-related content. Our approach adapts a pre-trained model using a corpus of Arabic texts about regional politics and conflicts. We compare the performance of our models to baseline BERT models. Our findings show that the performance of NLP models for Middle Eastern politics and conflict analysis is enhanced by the use of domain-specific pre-trained local language models. This study offers political and conflict analysts, including policymakers, scholars, and practitioners, new approaches and tools for deciphering the intricate dynamics of local politics and conflicts directly in Arabic.
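The adaptation step described in the abstract amounts to continual pre-training with masked language modeling on a domain corpus. Below is a minimal sketch of that setup using the Hugging Face transformers and datasets libraries; the base checkpoint name and the corpus file are assumptions for illustration, not the exact configuration reported in the paper.

```python
# Minimal sketch of continual pre-training (masked language modeling) on a
# domain corpus. The base checkpoint and the corpus file name are assumed
# placeholders, not the authors' exact setup.
from datasets import load_dataset
from transformers import (
    AutoModelForMaskedLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

base_checkpoint = "aubmindlab/bert-base-arabertv2"  # assumed Arabic base model
tokenizer = AutoTokenizer.from_pretrained(base_checkpoint)
model = AutoModelForMaskedLM.from_pretrained(base_checkpoint)

# Hypothetical domain corpus: one Arabic conflict-related document per line.
corpus = load_dataset("text", data_files={"train": "arabic_conflict_corpus.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = corpus.map(tokenize, batched=True, remove_columns=["text"])

# Randomly mask 15% of tokens, the standard BERT-style MLM objective.
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm_probability=0.15)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="conflibert-arabic-cont", num_train_epochs=3),
    train_dataset=tokenized["train"],
    data_collator=collator,
)
trainer.train()
```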
2022
ConfliBERT: A Pre-trained Language Model for Political Conflict and Violence
Yibo Hu | MohammadSaleh Hosseini | Erick Skorupa Parolin | Javier Osorio | Latifur Khan | Patrick Brandt | Vito D’Orazio
Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
Analyzing conflicts and political violence around the world is a persistent challenge in the political science and policy communities due in large part to the vast volumes of specialized text needed to monitor conflict and violence on a global scale. To help advance research in political science, we introduce ConfliBERT, a domain-specific pre-trained language model for conflict and political violence. We first gather a large domain-specific text corpus for language modeling from various sources. We then build ConfliBERT using two approaches: pre-training from scratch and continual pre-training. To evaluate ConfliBERT, we collect 12 datasets and implement 18 tasks to assess the models’ practical application in conflict research. Finally, we evaluate several versions of ConfliBERT in multiple experiments. Results consistently show that ConfliBERT outperforms BERT when analyzing political violence and conflict.
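For a sense of how such a model is applied to the downstream tasks mentioned above, the sketch below loads a ConfliBERT-style checkpoint with a sequence-classification head, the typical starting point for fine-tuning on a conflict-relevance task. The checkpoint name and example sentences are assumptions for illustration, not the paper's exact evaluation setup.

```python
# Minimal sketch: load a ConfliBERT-style checkpoint with a classification
# head and run it on example sentences. The checkpoint name is an assumed
# placeholder; the classification head is newly initialized, so predictions
# are meaningful only after fine-tuning on a labeled conflict dataset.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

checkpoint = "snowood1/ConfliBERT-cont-cased"  # assumed continual-pretraining variant
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(checkpoint, num_labels=2)

sentences = [
    "Protesters clashed with security forces in the capital on Tuesday.",
    "The central bank left interest rates unchanged this quarter.",
]
inputs = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits
print(logits.argmax(dim=-1).tolist())  # predicted label ids
```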
Co-authors
- Latifur Khan 2
- Javier Osorio 2
- Vito D’Orazio 2
- Sultan Alsarra 1
- Luay Abdeljaber 1