Paul Rayson
2026
Proceedings of the 2nd Workshop on NLP for Languages Using Arabic Script
Mo El-Haj | Paul Rayson | Mustafa Jarrar | Ignatius Ezeani | Saad Ezzini | Sina Ahmadi | Amal Haddad Haddad | Cynthia Amol | Ahmad Abdelali | Shadi Abudalfa
Improving on State-of-the-Art Models for Sentiment Analysis on Saudi-English Code-Switching Text
Samaher Alghamdi | Paul Rayson | Reem Alotibi
Proceedings of the 2nd Workshop on NLP for Languages Using Arabic Script
Inserting English words, phrases, or sentences while writing or speaking in the Saudi Arabic dialect has become a widespread phenomenon in Saudi society. This phenomenon is linguistically called code-switching. It remains unclear how current sentiment analysis methods perform on Saudi-English code-switching text. In this paper, we address this gap by conducting the first sentiment analysis study on Saudi-English code-switching text. We present the first Saudi-English Sentiment Analysis Code Switching Dataset (SESA-CSD) and establish baseline results on this dataset. By evaluating multiple state-of-the-art small language models, we achieve improvements of 3% to 11% over the baseline in both accuracy and macro-F1. Among all small language models, XLM-RoBERTa achieved the highest performance, with an accuracy of 95.50% and a macro-F1 of 95.53%. Our findings indicate that multilingual and Arabic small language models, such as XLM-RoBERTa, GigaBERT, and SaudiBERT, consistently outperform bilingual Arabic-English large language models, such as Fanar and ALLaM, across zero-shot and multiple few-shot settings.
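The accuracy and macro-F1 figures reported above can be computed as follows; this is a minimal pure-Python sketch with hypothetical gold and predicted labels, not the paper's evaluation code.

```python
def accuracy(gold, pred):
    """Fraction of predictions that match the gold label."""
    return sum(g == p for g, p in zip(gold, pred)) / len(gold)

def macro_f1(gold, pred):
    """Unweighted mean of per-class F1 scores."""
    labels = set(gold) | set(pred)
    f1s = []
    for label in labels:
        tp = sum(g == label and p == label for g, p in zip(gold, pred))
        fp = sum(g != label and p == label for g, p in zip(gold, pred))
        fn = sum(g == label and p != label for g, p in zip(gold, pred))
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        f1s.append(2 * precision * recall / (precision + recall)
                   if precision + recall else 0.0)
    return sum(f1s) / len(f1s)

# Hypothetical three-class sentiment labels for illustration.
gold = ["pos", "neg", "neu", "pos", "neg"]
pred = ["pos", "neg", "pos", "pos", "neu"]
print(round(accuracy(gold, pred), 2))  # 0.6
print(round(macro_f1(gold, pred), 2))  # 0.49
```

Because macro-F1 averages per-class scores without weighting by class frequency, it can diverge noticeably from accuracy on imbalanced code-switching data.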
Proceedings of the Second Workshop on Language Models for Low-Resource Languages (LoResLM 2026)
Hansi Hettiarachchi | Tharindu Ranasinghe | Alistair Plum | Paul Rayson | Ruslan Mitkov | Mohamed Gaber | Damith Premasiri | Fiona Anting Tan | Lasitha Uyangodage
Overview of the Second Workshop on Language Models for Low-Resource Languages (LoResLM 2026)
Hansi Hettiarachchi | Tharindu Ranasinghe | Alistair Plum | Paul Rayson | Ruslan Mitkov | Mohamed Medhat Gaber | Damith Premasiri | Fiona Anting Tan | Lasitha Uyangodage
Proceedings of the Second Workshop on Language Models for Low-Resource Languages (LoResLM 2026)
The second workshop on Language Models for Low-Resource Languages (LoResLM 2026) was held in conjunction with the 19th Conference of the European Chapter of the Association for Computational Linguistics (EACL 2026) in Rabat, Morocco. This workshop mainly aimed to provide a forum for researchers to share and discuss their ongoing work on language models (LMs) focusing on low-resource languages and dialects, following recent advancements in neural language models and their linguistic biases towards high-resource languages. LoResLM 2026 attracted notable interest from the natural language processing (NLP) community, resulting in 55 accepted papers from 79 submissions. These contributions cover a broad range of low-resource languages from 13 language families and 11 diverse research areas, paving the way for future possibilities and promoting linguistic inclusivity in NLP.
2025
Proceedings of the 1st Workshop on NLP for Languages Using Arabic Script
Mo El-Haj | Amal Haddad | Cynthia Amol | Sina Ahmadi | Hugh Paterson III | Ignatius Ezeani | Saad Ezzini | Paul Rayson
Sinhala Encoder-only Language Models and Evaluation
Tharindu Ranasinghe | Hansi Hettiarachchi | Nadeesha Chathurangi Naradde Vidana Pathirana | Damith Premasiri | Lasitha Uyangodage | Isuri Nanomi Arachchige | Alistair Plum | Paul Rayson | Ruslan Mitkov
Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
Recently, language models (LMs) have produced excellent results in many natural language processing (NLP) tasks. However, their effectiveness is highly dependent on available pre-training resources, which is particularly challenging for low-resource languages such as Sinhala. Furthermore, the scarcity of benchmarks to evaluate LMs is also a major concern for low-resource languages. In this paper, we address these two challenges for Sinhala by (i) collecting the largest monolingual corpus for Sinhala, (ii) training multiple LMs on this corpus, and (iii) compiling the first Sinhala NLP benchmark (Sinhala-GLUE) and evaluating LMs on it. We show that the Sinhala LMs trained in this paper outperform popular multilingual LMs, such as XLM-R, and existing Sinhala LMs in downstream NLP tasks. All the trained LMs are publicly available. We also release Sinhala-GLUE with a public leaderboard, and we hope that it will enable further advancements in developing and evaluating LMs for Sinhala.
LENS: Learning Entities from Narratives of Skin Cancer
Daisy Monika Lal | Paul Rayson | Christopher Peter | Ignatius Ezeani | Mo El-Haj | Yafei Zhu | Yufeng Liu
Proceedings of the 31st International Conference on Computational Linguistics: System Demonstrations
Learning entities from narratives of skin cancer (LENS) is an automatic entity recognition system built on colloquial writings from skin cancer-related Reddit forums. LENS encapsulates a comprehensive set of 24 labels that address clinical, demographic, and psychosocial aspects of skin cancer. Furthermore, we release LENS as a pip-installable PyPI package, making it easy for developers to download and install, and also provide a web application that allows users to get model predictions interactively, useful for researchers and individuals with minimal programming experience. Additionally, we publish the annotation guidelines designed specifically for spontaneous skin cancer narratives, which can be used to better understand and address challenges when developing corpora or systems for similar diseases. The model achieves an overall entity-level F1 score of 0.561, with notable performance for entities such as “CANC_T” (0.747), “STG” (0.788), “POB” (0.714), “GENDER” (0.750), “A/G” (0.714), and “PPL” (0.703). Other entities with significant results include “TRT” (0.625), “MED” (0.606), “AGE” (0.646), “EMO” (0.619), and “MHD” (0.5). We believe that LENS can serve as an essential tool supporting the analysis of patient discussions, leading to improvements in the design and development of modern smart healthcare technologies.
Hindi Reading Comprehension: Do Large Language Models Exhibit Semantic Understanding?
Daisy Monika Lal | Paul Rayson | Mo El-Haj
Proceedings of the First Workshop on Natural Language Processing for Indo-Aryan and Dravidian Languages
In this study, we explore the performance of four advanced generative AI models (GPT-3.5, GPT-4, Llama3, and HindiGPT) on the Hindi reading comprehension task. Using a zero-shot, instruction-based prompting strategy, we assess model responses through a comprehensive triple evaluation framework using the HindiRC dataset. Our framework combines (1) automatic evaluation using ROUGE, BLEU, BLEURT, METEOR, and Cosine Similarity; (2) rating-based assessments focussing on correctness, comprehension depth, and informativeness; and (3) preference-based selection to identify the best responses. Human ratings indicate that GPT-4 outperforms the other LLMs on all parameters, followed by HindiGPT, GPT-3.5, and then Llama3. Preference-based evaluation similarly placed GPT-4 (80%) as the best model, followed by HindiGPT (74%). However, automatic evaluation showed GPT-4 to be the lowest performer on n-gram metrics, yet the best performer on semantic metrics, suggesting it captures deeper meaning and semantic alignment over direct lexical overlap, which aligns with its strong human evaluation scores. This study also highlights that even though the models mostly address literal factual recall questions with high precision, they still face the challenge of specificity and interpretive bias at times.
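The gap between n-gram metrics and semantic metrics noted above can be illustrated with a toy sketch: a ROUGE-1-style unigram F1 rewards only lexical overlap, so a valid paraphrase scores poorly. The cosine function below operates on simple term-frequency vectors as a lightweight stand-in (real semantic metrics such as BLEURT use trained models); the example sentences are hypothetical.

```python
import math
from collections import Counter

def unigram_f1(reference, candidate):
    """ROUGE-1-style F1: unigram overlap between reference and candidate."""
    ref, cand = Counter(reference.split()), Counter(candidate.split())
    overlap = sum((ref & cand).values())  # clipped counts of shared words
    if overlap == 0:
        return 0.0
    p, r = overlap / sum(cand.values()), overlap / sum(ref.values())
    return 2 * p * r / (p + r)

def cosine_similarity(a, b):
    """Cosine over term-frequency vectors (semantic metrics use embeddings instead)."""
    va, vb = Counter(a.split()), Counter(b.split())
    dot = sum(va[t] * vb[t] for t in va)
    na = math.sqrt(sum(v * v for v in va.values()))
    nb = math.sqrt(sum(v * v for v in vb.values()))
    return dot / (na * nb) if na and nb else 0.0

reference = "the festival begins in the spring"
paraphrase = "celebrations start when spring arrives"
print(round(unigram_f1(reference, paraphrase), 2))  # low, despite similar meaning
```

A model that paraphrases accurately, as GPT-4 appears to do here, is therefore penalised by n-gram metrics while scoring well on embedding-based ones.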
Proceedings of the First Workshop on Natural Language Processing and Language Models for Digital Humanities
Isuri Nanomi Arachchige | Francesca Frontini | Ruslan Mitkov | Paul Rayson
Proceedings of the First Workshop on Language Models for Low-Resource Languages
Hansi Hettiarachchi | Tharindu Ranasinghe | Paul Rayson | Ruslan Mitkov | Mohamed Gaber | Damith Premasiri | Fiona Anting Tan | Lasitha Uyangodage
Overview of the First Workshop on Language Models for Low-Resource Languages (LoResLM 2025)
Hansi Hettiarachchi | Tharindu Ranasinghe | Paul Rayson | Ruslan Mitkov | Mohamed Gaber | Damith Premasiri | Fiona Anting Tan | Lasitha Randunu Chandrakantha Uyangodage
Proceedings of the First Workshop on Language Models for Low-Resource Languages
The first Workshop on Language Models for Low-Resource Languages (LoResLM 2025) was held in conjunction with the 31st International Conference on Computational Linguistics (COLING 2025) in Abu Dhabi, United Arab Emirates. This workshop mainly aimed to provide a forum for researchers to share and discuss their ongoing work on language models (LMs) focusing on low-resource languages, following the recent advancements in neural language models and their linguistic biases towards high-resource languages. LoResLM 2025 attracted notable interest from the natural language processing (NLP) community, resulting in 35 accepted papers from 52 submissions. These contributions cover a broad range of low-resource languages from eight language families and 13 diverse research areas, paving the way for future possibilities and promoting linguistic inclusivity in NLP.
Proceedings of the First International Workshop on Nakba Narratives as Language Resources
Mustafa Jarrar | Nizar Habash | Mo El-Haj | Amal Haddad Haddad | Zeina Jallad | Camille Mansour | Diana Allan | Paul Rayson | Tymaa Hammouda | Sanad Malaysha
Proceedings of the first International Workshop on Nakba Narratives as Language Resources
Mustafa Jarrar | Nizar Habash | Mo El-Haj | Amal Haddad Haddad | Zeina Jallad | Camille Mansour | Diana Allan | Paul Rayson | Tymaa Hammouda | Sanad Malaysha
Proceedings of the first International Workshop on Nakba Narratives as Language Resources
The Nakba Lexicon: Building a Comprehensive Dataset from Palestinian Literature
Izza AbuHaija | Salim Al Mandhari | Mo El-Haj | Jonas Sibony | Paul Rayson
Proceedings of the First International Workshop on Nakba Narratives as Language Resources
This paper introduces the Nakba Lexicon, a comprehensive dataset derived from the poetry collection Asifa ‘Ala al-Iz‘aj (Sorry for the Disturbance) by Istiqlal Eid, a Palestinian poet from El-Birweh. Eid’s work poignantly reflects on themes of Palestinian identity, displacement, and resilience, serving as a resource for preserving linguistic and cultural heritage in the context of post-Nakba literature. The dataset is structured into ten thematic domains, including political terminology, memory and preservation, sensory and emotional lexicon, toponyms, nature, and external linguistic influences such as Hebrew, French, and English, thereby capturing the socio-political, emotional, and cultural dimensions of the Nakba. The Nakba Lexicon uniquely emphasises the contributions of women to Palestinian literary traditions, shedding light on often-overlooked narratives of resilience and cultural continuity. Advanced Natural Language Processing (NLP) techniques were employed to analyse the dataset, with fine-tuned pre-trained models such as ARABERT and MARBERT achieving F1-scores of 0.87 and 0.68 in language and lexical classification tasks, respectively, significantly outperforming traditional machine learning models. These results highlight the potential of domain-specific computational models to effectively analyse complex datasets, facilitating the preservation of marginalised voices. By bridging computational methods with cultural preservation, this study enhances the understanding of Palestinian linguistic heritage and contributes to broader efforts in documenting and analysing endangered narratives. The Nakba Lexicon paves the way for future interdisciplinary research, showcasing the role of NLP in addressing historical trauma, resilience, and cultural identity.
Toponym Resolution: Will Prompt Engineering Change Expectations?
Isuri Anuradha | Deshan Koshala Sumanathilaka | Ruslan Mitkov | Paul Rayson
Proceedings of the 15th International Conference on Recent Advances in Natural Language Processing - Natural Language Processing in the Generative AI Era
Large Language Models (LLMs) have revolutionised the field of artificial intelligence and have been successfully employed in many disciplines, capturing widespread attention and enthusiasm. Many previous studies have established that domain-specific deep learning models perform competitively with general-purpose LLMs (Maatouk et al., 2024; Lu et al., 2024). However, a suitable prompt which provides direct instructions and background information is expected to yield improved results (Kamruzzaman and Kim, 2024). The present study focuses on utilising LLMs for the Toponym Resolution task by incorporating Retrieval-Augmented Generation (RAG) and prompting techniques to surpass the results of traditional deep learning models. Moreover, this study demonstrates that promising results can be achieved without relying on large amounts of labelled, domain-specific data. Following a descriptive comparison of open-source and proprietary LLMs across different prompt engineering techniques, the GPT-4o model performs best among the evaluated LLMs on the Toponym Resolution task.
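The RAG-style prompting described above can be sketched as follows: retrieve candidate gazetteer entries for a place name, then assemble a prompt asking an LLM to choose among them. The gazetteer, retriever, and prompt wording here are hypothetical stand-ins for the paper's actual resources.

```python
# Toy gazetteer: in practice this would be a large resource such as GeoNames.
GAZETTEER = [
    {"name": "Paris", "country": "France", "lat": 48.86, "lon": 2.35},
    {"name": "Paris", "country": "United States (Texas)", "lat": 33.66, "lon": -95.56},
    {"name": "London", "country": "United Kingdom", "lat": 51.51, "lon": -0.13},
]

def retrieve(toponym, k=2):
    """Rank gazetteer entries by exact-name match (a stand-in for a real retriever)."""
    hits = [e for e in GAZETTEER if e["name"].lower() == toponym.lower()]
    return hits[:k]

def build_prompt(sentence, toponym):
    """Assemble the instruction, context sentence, and retrieved candidates."""
    candidates = retrieve(toponym)
    lines = [f"- {e['name']}, {e['country']} ({e['lat']}, {e['lon']})"
             for e in candidates]
    return ("Resolve the place name to one candidate below.\n"
            f"Sentence: {sentence}\n"
            f"Place name: {toponym}\n"
            "Candidates:\n" + "\n".join(lines))

print(build_prompt("He flew from Paris to Houston.", "Paris"))
```

The retrieved context turns an open-ended generation problem into a constrained choice, which is one reason RAG prompts tend to help on disambiguation tasks like toponym resolution.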
FreeTxt: Analyse and Visualise Multilingual Qualitative Survey Data for Cultural Heritage Sites
Nouran Khallaf | Ignatius Ezeani | Dawn Knight | Paul Rayson | Mo El-Haj | John Vidler | James Davies | Fernando Alva-Manchego
Proceedings of the 15th International Conference on Recent Advances in Natural Language Processing - Natural Language Processing in the Generative AI Era
We introduce FreeTxt, a free and open-source web-based tool designed to support the analysis and visualisation of multilingual qualitative survey data, with a focus on low-resource languages. Developed in collaboration with stakeholders, FreeTxt integrates established techniques from corpus linguistics with modern natural language processing methods in an intuitive interface accessible to non-specialists. The tool currently supports bilingual processing and visualisation of English and Welsh responses, with ongoing extensions to other languages such as Vietnamese. Key functionalities include semantic tagging via PyMUSAS, multilingual sentiment analysis, keyword and collocation visualisation, and extractive summarisation. User evaluations with cultural heritage institutions demonstrate the system’s utility and potential for broader impact.
SENTimental - a Simple Multilingual Sentiment Annotation Tool
John Vidler | Paul Rayson | Dawn Knight
Proceedings of the 15th International Conference on Recent Advances in Natural Language Processing - Natural Language Processing in the Generative AI Era
Here we present SENTimental, a simple and fast web-based, mobile-friendly tool for capturing sentiment annotations from participants and citizen-scientist volunteers to create training and testing data for low-resource languages. In contrast to existing tools, we focus on assigning broad values to segments of text rather than specific tags for tokens or spans, in order to build datasets for training and testing LLMs. The SENTimental interface minimises barriers to entry, with the goal of maximising the time a user spends in a flow state in which they can quickly and accurately rate each text fragment without being distracted by the complexity of the interface. Designed from the outset to handle multilingual representations, SENTimental allows parallel corpus data to be presented to the user and switched between instantly for immediate comparison. As such, users of any loaded language can contribute to the data gathered, building up comparable rankings in a simple structured dataset for later processing.
Proceedings of the 4th Workshop on Arabic Corpus Linguistics (WACL-4)
Saad Ezzini | Hamza Alami | Ismail Berrada | Abdessamad Benlahbib | Abdelkader El Mahdaouy | Salima Lamsiyah | Hatim Derrouz | Amal Haddad Haddad | Mustafa Jarrar | Mo El-Haj | Ruslan Mitkov | Paul Rayson
Proceedings of the 4th Workshop on Arabic Corpus Linguistics (WACL-4)
Saad Ezzini | Hamza Alami | Ismail Berrada | Abdessamad Benlahbib | Abdelkader El Mahdaouy | Salima Lamsiyah | Hatim Derrouz | Amal Haddad Haddad | Mustafa Jarrar | Mo El-Haj | Ruslan Mitkov | Paul Rayson
Proceedings of the 4th Workshop on Arabic Corpus Linguistics (WACL-4)
2024
Analysing Emotions in Cancer Narratives: A Corpus-Driven Approach
Daisy Monika Lal | Paul Rayson | Sheila A. Payne | Yufeng Liu
Proceedings of the First Workshop on Patient-Oriented Language Processing (CL4Health) @ LREC-COLING 2024
Cancer not only affects a patient’s physical health, but it can also elicit a wide spectrum of intense emotions in patients, friends, and family members. People with cancer and their carers (family member, partner, or friend) are increasingly turning to the web for information and support. Despite the expansion of sentiment analysis in the context of social media and healthcare, there is relatively less research on patient narratives, which are longer, more complex texts, and difficult to assess. In this exploratory work, we examine how patients and carers express their feelings about various aspects of cancer (treatments and stages). The objective of this paper is to illustrate with examples the nature of language in the clinical domain, as well as the complexities of language when performing automatic sentiment and emotion analysis. We perform a linguistic analysis of a corpus of cancer narratives collected from Reddit. We examine the performance of five state-of-the-art models (T5, DistilBERT, Roberta, RobertaGo, and NRCLex) to see how well they match with human comparisons separated by linguistic and medical background. The corpus yielded several surprising results that could be useful to sentiment analysis NLP experts. The linguistic issues encountered were classified into four categories: statements expressing a variety of emotions, ambiguous or conflicting statements with contradictory emotions, statements requiring additional context, and statements in which sentiment and emotions can be inferred but are not explicitly mentioned.
Medical-FLAVORS: A Figurative Language and Vocabulary Open Repository for Spanish in the Medical Domain
Lucia Pitarch | Emma Angles-Herrero | Yufeng Liu | Daisy Monika Lal | Jorge Gracia | Paul Rayson | Judith Rietjens
Proceedings of the First Workshop on Patient-Oriented Language Processing (CL4Health) @ LREC-COLING 2024
Metaphors shape the way we think by enabling the expression of one concept in terms of another one. For instance, cancer can be understood as a place from which one can go in and out, as a journey that one can traverse, or as a battle. Giving patients awareness of the way they refer to cancer and different narratives in which they can reframe it has been proven to be a key aspect when experiencing the disease. In this work, we propose a preliminary identification and representation of Spanish cancer metaphors using MIP (Metaphor Identification Procedure) and MetaNet. The created resource is the first openly available dataset for medical metaphors in Spanish. Thus, in the future, we expect to use it as the gold standard in automatic metaphor processing tasks, which will also serve to further populate the resource and understand how cancer is experienced and narrated.
Exploring the Suitability of Transformer Models to Analyse Mental Health Peer Support Forum Data for a Realist Evaluation
Matthew Coole | Paul Rayson | Zoe Glossop | Fiona Lobban | Paul Marshall | John Vidler
Proceedings of the First Workshop on Patient-Oriented Language Processing (CL4Health) @ LREC-COLING 2024
Mental health peer support forums have become widely used in recent years. The emerging mental health crisis and the COVID-19 pandemic have meant that finding a place online for support and advice when dealing with mental health issues is more critical than ever. The need to examine, understand and find ways to improve the support provided by mental health forums is vital in the current climate. As part of this, we present our initial explorations in using modern transformer models to detect four key concepts (connectedness, lived experience, empathy and gratitude), which we believe are essential to understanding how people use mental health forums and will serve as a basis for testing more expansive realist theories about mental health forums in the future. As part of this work, we also replicate previously published results on empathy utilising an existing annotated dataset and test the other concepts on our manually annotated dataset of mental health forum posts. These results serve as a basis for future research examining peer support forums.
The IgboAPI Dataset: Empowering Igbo Language Technologies through Multi-dialectal Enrichment
Chris Chinenye Emezue | Ifeoma Okoh | Chinedu Emmanuel Mbonu | Chiamaka Chukwuneke | Daisy Monika Lal | Ignatius Ezeani | Paul Rayson | Ijemma Onwuzulike | Chukwuma Onyebuchi Okeke | Gerald Okey Nweya | Bright Ikechukwu Ogbonna | Chukwuebuka Uchenna Oraegbunam | Esther Chidinma Awo-Ndubuisi | Akudo Amarachukwu Osuagwu
Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)
The Igbo language is facing a risk of becoming endangered, as indicated by a 2025 UNESCO study. This highlights the need to develop language technologies for Igbo to foster communication, learning and preservation. To create robust, impactful, and widely adopted language technologies for Igbo, it is essential to incorporate the multi-dialectal nature of the language. The primary obstacle in achieving dialectal-aware language technologies is the lack of comprehensive dialectal datasets. In response, we present the IgboAPI dataset, a multi-dialectal Igbo-English dictionary dataset, developed with the aim of enhancing the representation of Igbo dialects. Furthermore, we illustrate the practicality of the IgboAPI dataset through two distinct studies: one focusing on Igbo semantic lexicon and the other on machine translation. In the semantic lexicon project, we successfully establish an initial Igbo semantic lexicon for the Igbo semantic tagger, while in the machine translation study, we demonstrate that by finetuning existing machine translation systems using the IgboAPI dataset, we significantly improve their ability to handle dialectal variations in sentences.
Proceedings of the First International Conference on Natural Language Processing and Artificial Intelligence for Cyber Security
Ruslan Mitkov | Saad Ezzini | Tharindu Ranasinghe | Ignatius Ezeani | Nouran Khallaf | Cengiz Acarturk | Matthew Bradbury | Mo El-Haj | Paul Rayson
Proceedings of the First International Conference on Natural Language Processing and Artificial Intelligence for Cyber Security
Ruslan Mitkov | Saad Ezzini | Tharindu Ranasinghe | Ignatius Ezeani | Nouran Khallaf | Cengiz Acarturk | Matthew Bradbury | Mo El-Haj | Paul Rayson
Proceedings of the First International Conference on Natural Language Processing and Artificial Intelligence for Cyber Security
Is it Offensive or Abusive? An Empirical Study of Hateful Language Detection of Arabic Social Media Texts
Salim Al Mandhari | Mo El-Haj | Paul Rayson
Proceedings of the First International Conference on Natural Language Processing and Artificial Intelligence for Cyber Security
Among many potential subjects studied in Sentiment Analysis, widespread offensive and abusive language on social media has triggered interest in reducing its risks to users, children in particular. This paper centres on distinguishing between offensive and abusive language detection within Arabic social media texts through the employment of various machine and deep learning techniques. The techniques include Naïve Bayes (NB), Support Vector Machine (SVM), fastText, keras, and RoBERTa XLM multilingual embeddings, which have demonstrated superior performance compared to other statistical machine learning methods and different kinds of embeddings like fastText. The methods were implemented on two separate corpora of YouTube comments totalling 47K comments. The results demonstrated that all models, except NB, reached an accuracy of 82%. It was also shown that word tri-grams enhance classification performance, alongside other tuning techniques such as TF-IDF and grid-search. The linguistic findings, aimed at distinguishing between offensive and abusive language, were consistent with machine learning (ML) performance, which effectively classified the two distinct classes of sentiment: offensive and abusive.
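The classical pipeline described above (word n-grams up to trigrams, TF-IDF weighting, and grid search over a classifier) can be sketched with scikit-learn. The toy English comments and three-way label set are illustrative stand-ins for the paper's Arabic YouTube corpora, and the SVM parameter grid is an assumption, not the paper's configuration.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline
from sklearn.svm import LinearSVC

# Hypothetical comments and labels standing in for the Arabic corpora.
texts = ["you are awful", "I will hurt you", "great video", "nice work",
         "you are terrible", "I will find you", "lovely song", "well done"]
labels = ["offensive", "abusive", "neutral", "neutral",
          "offensive", "abusive", "neutral", "neutral"]

pipeline = Pipeline([
    ("tfidf", TfidfVectorizer(ngram_range=(1, 3))),  # uni-, bi-, and tri-grams
    ("svm", LinearSVC()),
])
# Tiny illustrative grid search over the SVM regularisation strength.
search = GridSearchCV(pipeline, {"svm__C": [0.1, 1, 10]}, cv=2)
search.fit(texts, labels)
print(search.predict(["you are awful"]))
```

With realistic corpus sizes, the n-gram range and TF-IDF settings would also typically be included in the search grid.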
2023
Abstractive Hindi Text Summarization: A Challenge in a Low-Resource Setting
Daisy Monika Lal | Paul Rayson | Krishna Pratap Singh | Uma Shanker Tiwary
Proceedings of the 20th International Conference on Natural Language Processing (ICON)
The Internet has led to a surge in text data in Indian languages; hence, text summarization tools have become essential for information retrieval. Due to a lack of data resources, prevailing summarizing systems in Indian languages have been primarily dependent on and derived from English text summarization approaches. Despite Hindi being the most widely spoken language in India, progress in Hindi summarization is being delayed due to the lack of proper labeled datasets. In this preliminary work we address two major challenges in abstractive Hindi text summarization: creating Hindi language summaries and assessing the efficacy of the produced summaries. Since transfer learning (TL) has shown to be effective in low-resource settings, in order to assess the effectiveness of TL-based approach for summarizing Hindi text, we perform a comparative analysis using three encoder-decoder models: attention-based (BASE), multi-level (MED), and TL-based model (RETRAIN). In relation to the second challenge, we introduce the ICE-H evaluation metric based on the ICE metric for assessing English language summaries. The Rouge and ICE-H metrics are used for evaluating the BASE, MED, and RETRAIN models. According to the Rouge results, the RETRAIN model produces slightly better abstracts than the BASE and MED models for 20k and 100k training samples. The ICE-H metric, on the other hand, produces inconclusive results, which may be attributed to the limitations of existing Hindi NLP resources, such as word embeddings and POS taggers.
FinAraT5: A text to text model for financial Arabic text understanding and generation
Nadhem Zmandar | Mo El-Haj | Paul Rayson
Proceedings of the 4th Conference on Language, Data and Knowledge
Open-Source Thesaurus Development for Under-Resourced Languages: a Welsh Case Study
Nouran Khallaf | Elin Arfon | Mo El-Haj | Jonathan Morris | Dawn Knight | Paul Rayson | Tymaa Hasanain Hammouda | Mustafa Jarrar
Proceedings of the 4th Conference on Language, Data and Knowledge
2022
Proceedings of the 4th Financial Narrative Processing Workshop @LREC2022
Mahmoud El-Haj | Paul Rayson | Nadhem Zmandar
Proceedings of the 4th Financial Narrative Processing Workshop @LREC2022
The Financial Narrative Summarisation Shared Task (FNS 2022)
Mahmoud El-Haj | Nadhem Zmandar | Paul Rayson | Ahmed AbuRa’ed | Marina Litvak | Nikiforos Pittaras | George Giannakopoulos | Aris Kosmopoulos | Blanca Carbajo-Coronado | Antonio Moreno-Sandoval
Proceedings of the 4th Financial Narrative Processing Workshop @LREC2022
This paper presents the results and findings of the Financial Narrative Summarisation Shared Task on summarising UK, Greek and Spanish annual reports. The shared task was organised as part of the Financial Narrative Processing 2022 Workshop (FNP 2022 Workshop). The Financial Narrative Summarisation Shared Task (FNS-2022) has been running since 2020 as part of the Financial Narrative Processing (FNP) workshop series (El-Haj et al., 2022; El-Haj et al., 2021; El-Haj et al., 2020b; El-Haj et al., 2019c; El-Haj et al., 2018). The shared task included one main task: the use of either abstractive or extractive automatic summarisers to summarise long documents, namely UK, Greek and Spanish financial annual reports. This shared task is the third to target financial documents. The data for the shared task was created and collected from publicly available annual reports published by firms listed on the stock exchanges of the UK, Greece and Spain. A total of 14 systems from 7 different teams participated in the shared task.
CoFiF Plus: A French Financial Narrative Summarisation Corpus
Nadhem Zmandar | Tobias Daudert | Sina Ahmadi | Mahmoud El-Haj | Paul Rayson
Proceedings of the Thirteenth Language Resources and Evaluation Conference
Natural Language Processing is increasingly being applied in the finance and business industry to analyse the text of many different types of financial documents. Given the growing number of firms around the world, the volume of financial disclosures and financial texts in different languages and forms is increasing sharply, and the study of language technology methods that automatically summarise content has therefore grown rapidly into a major research area. Corpora for financial narrative summarisation exist in English, but there is a significant lack of financial text resources in the French language. To remedy this, we present CoFiF Plus, the first French financial narrative summarisation dataset, providing a comprehensive set of financial texts written in French. The dataset has been extracted from French financial reports published in PDF format. It is composed of 1,703 reports from the most capitalised companies in France (Euronext Paris), covering a time frame from 1995 to 2021. This paper describes the collection, annotation and validation of the financial reports and their summaries. It also describes the dataset and gives the results of some baseline summarisers. Our datasets will be openly available upon the acceptance of the paper.
IgboBERT Models: Building and Training Transformer Models for the Igbo Language
Chiamaka Chukwuneke | Ignatius Ezeani | Paul Rayson | Mahmoud El-Haj
Proceedings of the Thirteenth Language Resources and Evaluation Conference
This work presents a standard Igbo named entity recognition (IgboNER) dataset as well as the results from training and fine-tuning state-of-the-art transformer IgboNER models. We discuss our dataset creation process: data collection, annotation and quality checking. We also present the experimental processes involved in building an IgboBERT language model from scratch, as well as fine-tuning it along with other non-Igbo pre-trained models for the downstream IgboNER task. Our results show that, although the IgboNER task benefited hugely from fine-tuning a large transformer model, fine-tuning a transformer model built from scratch with comparatively little Igbo text data seems to yield quite decent results for the IgboNER task. This work will contribute immensely to IgboNLP in particular as well as the wider African and low-resource NLP efforts. Keywords: Igbo, named entity recognition, BERT models, under-resourced, dataset
AraSAS: The Open Source Arabic Semantic Tagger
Mahmoud El-Haj | Elvis de Souza | Nouran Khallaf | Paul Rayson | Nizar Habash
Proceedings of the 5th Workshop on Open-Source Arabic Corpora and Processing Tools with Shared Tasks on Qur'an QA and Fine-Grained Hate Speech Detection
This paper presents AraSAS, the first open-source Arabic semantic analysis tagging system. AraSAS is a software framework that provides full semantic tagging of text written in Arabic. It is based on the UCREL Semantic Analysis System (USAS), which was first developed to semantically tag English text. Like USAS, AraSAS uses a hierarchical semantic tag set that contains 21 major discourse fields and 232 fine-grained semantic field tags. The paper describes the creation, validation and evaluation of AraSAS. In addition, we demonstrate a first case study to illustrate the affordances of applying the USAS and AraSAS semantic taggers to the Zayed University Arabic-English Bilingual Undergraduate Corpus (ZAEBUC) (Palfreyman and Habash, 2022), where we show and compare the coverage of the two semantic taggers by running them on Arabic and English essays on different topics. The analysis expands to compare the taggers when run on texts in Arabic and English written by the same writer, and on texts written by male and by female students. Variables for comparison include the frequency of use of particular semantic sub-domains, as well as the diversity of semantic elements within a text.
2021
Understanding who uses Reddit: Profiling individuals with a self-reported bipolar disorder diagnosis
Glorianna Jagfeld | Fiona Lobban | Paul Rayson | Steven Jones
Proceedings of the Seventh Workshop on Computational Linguistics and Clinical Psychology: Improving Access
Recently, NLP and health research on mental health conditions using public online data, including Reddit, has surged, but it has not reported user characteristics, which are important for judging the generalisability of findings. This paper shows how existing NLP methods can yield information on the clinical, demographic, and identity characteristics of almost 20K Reddit users who self-report a bipolar disorder diagnosis. This population consists of slightly more feminine- than masculine-gendered, mainly young or middle-aged, US-based adults who often report additional mental health diagnoses, which is compared with general Reddit statistics and epidemiological studies. Additionally, this paper carefully evaluates all methods and discusses ethical issues.
Proceedings of the 3rd Financial Narrative Processing Workshop
Mahmoud El-Haj | Paul Rayson | Nadhem Zmandar
Proceedings of the 3rd Financial Narrative Processing Workshop
Joint abstractive and extractive method for long financial document summarization
Nadhem Zmandar | Abhishek Singh | Mahmoud El-Haj | Paul Rayson
Proceedings of the 3rd Financial Narrative Processing Workshop
The Financial Narrative Summarisation Shared Task FNS 2021
Nadhem Zmandar | Mahmoud El-Haj | Paul Rayson | Ahmed AbuRa’ed | Marina Litvak | George Giannakopoulos | Nikiforos Pittaras
Proceedings of the 3rd Financial Narrative Processing Workshop
MasakhaNER: Named Entity Recognition for African Languages
David Ifeoluwa Adelani | Jade Abbott | Graham Neubig | Daniel D’souza | Julia Kreutzer | Constantine Lignos | Chester Palen-Michel | Happy Buzaaba | Shruti Rijhwani | Sebastian Ruder | Stephen Mayhew | Israel Abebe Azime | Shamsuddeen H. Muhammad | Chris Chinenye Emezue | Joyce Nakatumba-Nabende | Perez Ogayo | Aremu Anuoluwapo | Catherine Gitau | Derguene Mbaye | Jesujoba Alabi | Seid Muhie Yimam | Tajuddeen Rabiu Gwadabe | Ignatius Ezeani | Rubungo Andre Niyongabo | Jonathan Mukiibi | Verrah Otiende | Iroro Orife | Davis David | Samba Ngom | Tosin Adewumi | Paul Rayson | Mofetoluwa Adeyemi | Gerald Muriuki | Emmanuel Anebi | Chiamaka Chukwuneke | Nkiruka Odu | Eric Peter Wairagala | Samuel Oyerinde | Clemencia Siro | Tobius Saul Bateesa | Temilola Oloyede | Yvonne Wambui | Victor Akinode | Deborah Nabagereka | Maurice Katusiime | Ayodele Awokoya | Mouhamadane MBOUP | Dibora Gebreyohannes | Henok Tilaye | Kelechi Nwaike | Degaga Wolde | Abdoulaye Faye | Blessing Sibanda | Orevaoghene Ahia | Bonaventure F. P. Dossou | Kelechi Ogueji | Thierno Ibrahima DIOP | Abdoulaye Diallo | Adewale Akinfaderin | Tendai Marengereke | Salomey Osei
Transactions of the Association for Computational Linguistics, Volume 9
We take a step towards addressing the under-representation of the African continent in NLP research by bringing together different stakeholders to create the first large, publicly available, high-quality dataset for named entity recognition (NER) in ten African languages. We detail the characteristics of these languages to help researchers and practitioners better understand the challenges they pose for NER tasks. We analyze our datasets and conduct an extensive empirical evaluation of state-of-the-art methods across both supervised and transfer learning settings. Finally, we release the data, code, and models to inspire future research on African NLP.
2020
LexiDB: Patterns & Methods for Corpus Linguistic Database Management
Matthew Coole | Paul Rayson | John Mariani
Proceedings of the Twelfth Language Resources and Evaluation Conference
LexiDB is a tool for storing, managing and querying corpus data. In contrast to other database management systems (DBMSs), it is designed specifically for text corpora. It improves on other corpus management systems (CMSs) because data can be added to and deleted from corpora on the fly, with the ability to add live data to existing corpora. LexiDB sits between these two categories of DBMSs and CMSs: more specialised for language data than a general-purpose DBMS but more flexible than a traditional static corpus management system. Previous work has demonstrated the scalability of LexiDB in response to the growing need to scale out for ever-growing corpus datasets. Here, we present the patterns and methods developed in LexiDB for the storage, retrieval and querying of multi-level annotated corpus data. These techniques are evaluated and compared to an existing CMS (Corpus Workbench CWB - CQP) and indexer (Lucene). We find that LexiDB consistently outperforms existing tools for corpus queries. This is particularly apparent with large corpora and when handling queries with large result sets.
Developing an Arabic Infectious Disease Ontology to Include Non-Standard Terminology
Lama Alsudias | Paul Rayson
Proceedings of the Twelfth Language Resources and Evaluation Conference
Building ontologies is a crucial part of the semantic web endeavour. In recent years, research interest has grown rapidly in supporting languages such as Arabic in NLP in general, but there has been very little research on medical ontologies for Arabic. We present a new Arabic ontology in the infectious disease domain to support various important applications, including the monitoring of infectious disease spread via social media. This ontology meaningfully integrates the scientific vocabularies of infectious diseases with their informal equivalents. We use ontology learning strategies with manual checking to build the ontology. We applied three statistical methods for term extraction from selected Arabic infectious disease articles: TF-IDF, C-value, and YAKE. We also conducted a study, by consulting around 100 individuals, to discover the informal terms related to infectious diseases in Arabic. In future work, we will automatically extract the relations for infectious disease concepts, but for now these are manually created. We report two complementary experiments to evaluate the ontology: a quantitative evaluation of the term extraction results and an additional qualitative evaluation by a domain expert.
Infrastructure for Semantic Annotation in the Genomics Domain
Mahmoud El-Haj | Nathan Rutherford | Matthew Coole | Ignatius Ezeani | Sheryl Prentice | Nancy Ide | Jo Knight | Scott Piao | John Mariani | Paul Rayson | Keith Suderman
Proceedings of the Twelfth Language Resources and Evaluation Conference
We describe a novel super-infrastructure for biomedical text mining which incorporates an end-to-end pipeline for the collection, annotation, storage, retrieval and analysis of biomedical and life sciences literature, combining NLP and corpus linguistics methods. The infrastructure permits extreme-scale research on the open access PubMed Central archive. It combines an updatable Gene Ontology Semantic Tagger (GOST) for entity identification and semantic markup in the literature, with an NLP pipeline scheduler (Buster) to collect and process the corpus, and a bespoke columnar corpus database (LexiDB) for indexing. The corpus database is distributed to permit fast indexing, and provides a simple web front-end with corpus linguistics methods for sub-corpus comparison and retrieval. GOST is also connected as a service in the Language Application (LAPPS) Grid, in which context it is interoperable with other NLP tools and data in the Grid and can be combined with them in more complex workflows. In a literature-based discovery setting, we have created an annotated corpus of 9,776 papers with 5,481,543 words.
COVID-19 and Arabic Twitter: How can Arab World Governments and Public Health Organizations Learn from Social Media?
Lama Alsudias | Paul Rayson
Proceedings of the 1st Workshop on NLP for COVID-19 at ACL 2020
In March 2020, the World Health Organization announced the COVID-19 outbreak as a pandemic. Most previous social media related research has been on English tweets and COVID-19. In this study, we collect approximately 1 million Arabic tweets from the Twitter streaming API related to COVID-19. Focussing on outcomes that we believe will be useful for public health organizations, we analyse them in three different ways: identifying the topics discussed during the period, detecting rumours, and predicting the source of the tweets. We use the k-means algorithm for the first goal, with k=5. The topics discussed can be grouped as follows: COVID-19 statistics, prayers for God, COVID-19 locations, advice and education for prevention, and advertising. We sample 2,000 tweets and label them manually as false information, correct information, or unrelated. Then, we apply three different machine learning algorithms, Logistic Regression, Support Vector Classification, and Naïve Bayes, with two sets of features: a word frequency approach and word embeddings. We find that the machine learning classifiers are able to correctly identify the rumour-related tweets with 84% accuracy. We also try to predict the source of the rumour-related tweets using our previous model, which classifies tweets into five categories: academic, media, government, health professional, and public. Around 60% of the rumour-related tweets are classified as written by health professionals and academics.
Unfinished Business: Construction and Maintenance of a Semantically Tagged Historical Parliamentary Corpus, UK Hansard from 1803 to the present day
Matthew Coole | Paul Rayson | John Mariani
Proceedings of the Second ParlaCLARIN Workshop
Creating, curating and maintaining modern political corpora is becoming an ever more involved task. As interest in political discourse from various social bodies and the general public grows, so too does the need to enrich such datasets with metadata and linguistic annotations. Beyond this, such corpora must be easy to browse and search for linguists, social scientists, digital humanists and the general public. We present our efforts to compile a linguistically annotated and semantically tagged version of the Hansard corpus from 1803 right up to the present day. This involves combining multiple sources of documents and transcripts. We describe our toolchain for tagging, using several existing tools that provide tokenisation, part-of-speech tagging and semantic annotations. We also provide an overview of our bespoke web-based search interface built on LexiDB. In conclusion, we examine the completed corpus by looking at four case studies involving semantic categories made available by our toolchain.
Co-authors
- Mahmoud El-Haj 19
- Mo El-Haj 12
- Scott S.L. Piao 12
- Ignatius Ezeani 10
- Ruslan Mitkov 9
- Nadhem Zmandar 7
- Dawn Knight 6
- Daisy Monika Lal 6
- Tharindu Ranasinghe 6
- Hansi Hettiarachchi 5
- Damith Premasiri 5
- Lama Alsudias 4
- Matthew Coole 4
- Saad Ezzini 4
- Mustafa Jarrar 4
- Nouran Khallaf 4
- Andrew Moore 4
- Fiona Anting Tan 4
- Lasitha Uyangodage 4
- Sina Ahmadi 3
- Dawn Archer 3
- Chiamaka Chukwuneke 3
- Mohamed Gaber 3
- Amal Haddad Haddad 3
- Yufeng Liu 3
- John Mariani 3
- Olga Mudraya 3
- Alistair Plum 3
- John Vidler 3
- Stephen Wattam 3
- Steve Young 3
- Ahmed AbuRa’ed 2
- Salim Al Mandhari 2
- Cynthia Amol 2
- Isuri Nanomi Arachchige 2
- Francesca Bianchi 2
- Carmen Dayrell 2
- Chris Chinenye Emezue 2
- Roger Garside 2
- Nizar Habash 2
- Jo Knight 2
- David Leslie 2
- Marina Litvak 2
- Fiona Lobban 2
- Tony McEnery 2
- Henry Moss 2
- Rao Muhammad Adeel Nawab 2
- Nikiforos Pittaras 2
- Martin Walker 2
- Andrew Wilson 2
- Jade Abbott 1
- Ahmad Abdelali 1
- Mariam Aboelezz 1
- Izza AbuHaija 1
- Shadi Abudalfa 1
- Cengiz Acartürk 1
- David Ifeoluwa Adelani 1
- Tosin Adewumi 1
- Mofetoluwa Adeyemi 1
- Orevaoghene Ahia 1
- Adewale Akinfaderin 1
- Victor Akinode 1
- Jesujoba Alabi 1
- Hamza Alami 1
- Marc Alexander 1
- Samaher Alghamdi 1
- Diana Allan 1
- Reem Alotibi 1
- Fernando Alva-Manchego 1
- Jean Anderson 1
- Emmanuel Anebi 1
- Emma Angles-Herrero 1
- Aremu Anuoluwapo 1
- Isuri Anuradha 1
- Elin Arfon 1
- Vasiliki Athanasakou 1
- Eric Atwell 1
- Esther Chidinma Awo-Ndubuisi 1
- Ayodele Awokoya 1
- Israel Abebe Azime 1
- Bogdan Babych 1
- Tobius Saul Bateesa 1
- Abdessamad Benlahbib 1
- Ismail Berrada 1
- Damon Berridge 1
- Houda Bouamor 1
- Matthew Bradbury 1
- Happy Buzaaba 1
- Blanca Carbajo-Coronado 1
- Thierno Ibrahima DIOP 1
- Tobias Daudert 1
- Davis David 1
- James Davies 1
- Hatim Derrouz 1
- Abdoulaye Diallo 1
- Bonaventure F. P. Dossou 1
- Angela D’Egidio 1
- Daniel D’souza 1
- Abdelkader El Mahdaouy 1
- Abdoulaye Faye 1
- Sira Ferradans 1
- William H. Fletcher 1
- Francesca Frontini 1
- Mohamed Medhat Gaber 1
- Dibora Gebreyohannes 1
- George Giannakopoulos 2
- Catherine Gitau 1
- Zoe Glossop 1
- Jorge Gracia 1
- Tajuddeen Rabiu Gwadabe 1
- Amal Haddad 1
- Tymaa Hasanain Hammouda 1
- Tymaa Hammouda 1
- Nancy Ide 1
- Glorianna Jagfeld 1
- Zeina Jallad 1
- Ricardo-María Jiménez 1
- Steven JM Jones 1
- Maurice Katusiime 1
- Adam Kilgarriff 1
- Aris Kosmopoulos 1
- Julia Kreutzer 1
- Michal Křen 1
- Salima Lamsiyah 1
- Constantine Lignos 1
- Laura Löfberg 1
- Mouhamadane MBOUP 1
- Sanad Malaysha 1
- Camille Mansour 1
- Tendai Marengereke 1
- Paul Marshall 1
- Stephen Mayhew 1
- Derguene Mbaye 1
- Chinedu Emmanuel Mbonu 1
- Antonio Moreno-Sandoval 1
- Jonathan Morris 1
- Shamsuddeen Hassan Muhammad 1
- Jonathan Mukiibi 1
- Gerald Muriuki 1
- Deborah Nabagereka 1
- Joyce Nakatumba-Nabende 1
- Steven Neale 1
- Graham Neubig 1
- Samba Ngom 1
- Rubungo Andre Niyongabo 1
- Kelechi Nwaike 1
- Gerald Okey Nweya 1
- Nkiruka Odu 1
- Perez Ogayo 1
- Bright Ikechukwu Ogbonna 1
- Kelechi Ogueji 1
- Chukwuma Onyebuchi Okeke 1
- Ifeoma Okoh 1
- Temilola Oloyede 1
- Ijemma Onwuzulike 1
- Chukwuebuka Uchenna Oraegbunam 1
- Iroro Orife 1
- Salomey Osei 1
- Akudo Amarachukwu Osuagwu 1
- Verrah Akinyi Otiende 1
- Samuel Oyerinde 1
- Chester Palen-Michel 1
- Hugh Paterson III 1
- Nadeesha Chathurangi Naradde Vidana Pathirana 1
- Sheila A. Payne 1
- Christopher Peter 1
- Lucía Pitarch 1
- Sheryl Prentice 1
- Judith Rietjens 1
- Shruti Rijhwani 1
- Sebastian Ruder 1
- Nathan Rutherford 1
- Thomas Schleicher 1
- Jawad Shafi 1
- Muhammad Sharjeel 1
- Serge Sharoff 1
- Blessing Kudzaishe Sibanda 1
- Jonas Sibony 1
- Abhishek Singh 1
- Krishna Pratap Singh 1
- Clemencia Siro 1
- Elvis de Souza 1
- Keith Suderman 1
- Deshan Koshala Sumanathilaka 1
- Guangfan Sun 1
- Phoey Lee Teh 1
- Henok Tilaye 1
- Uma Shanker Tiwary 1
- Lasitha Randunu Chandrakantha Uyangodage 1
- Eric Peter Wairagala 1
- James Walkerdine 1
- Yvonne Wambui 1
- Gareth Watkins 1
- Degaga Wolde 1
- Seid Muhie Yimam 1
- Qi Yuan 1
- Yafei Zhu 1