Dawn Knight
2026
CEFR-Cymraeg: A Dataset and Baseline Models for Language Proficiency Assessment in Welsh
Eeshan Waqar | Jonathan Davies | Dawn Knight | Fernando Alva-Manchego
Proceedings of the Fifteenth Language Resources and Evaluation Conference
We introduce CEFR-Cymraeg, the first dataset annotated with Common European Framework of Reference (CEFR) levels for Welsh. The dataset is built from learning materials for adult learners, carefully extracted from widely used coursebooks and verified by teachers of Welsh as a second language. It spans levels A1 to B2 and includes multiple units of analysis: sentences, dialogues, paragraphs, and documents. In total, 2,658 entries are provided with gold-standard CEFR annotations, making CEFR-Cymraeg a valuable resource for research on language learning and low-resourced Celtic languages. To illustrate its potential applications, we define language proficiency assessment as a multi-class classification task and fine-tune multilingual pre-trained language models. Given the limited size of the dataset, we also experiment with data augmentation. Results show that these models successfully capture proficiency distinctions and generalise well to Welsh, with the best-performing model reaching a weighted F1-score of 0.83. Qualitative analysis confirmed that most apparent errors reflected valid pedagogical variation rather than model inconsistencies. CEFR-Cymraeg establishes a benchmark resource for Welsh and opens new opportunities for educational NLP, corpus linguistics, and multilingual proficiency research.
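The headline weighted F1-score of 0.83 can be made concrete with a short sketch. This is not the paper's evaluation code; it is a generic pure-Python re-implementation of weighted F1 over hypothetical CEFR labels in the A1–B2 range covered by the dataset, where each class's F1 is averaged with a weight proportional to its support:

```python
from collections import Counter

def weighted_f1(y_true, y_pred):
    """Weighted F1: per-class F1 scores averaged, weighted by class support in y_true."""
    labels = sorted(set(y_true))
    support = Counter(y_true)
    total = 0.0
    for lab in labels:
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == lab and p == lab)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != lab and p == lab)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == lab and p != lab)
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
        total += f1 * support[lab] / len(y_true)
    return total

# Toy gold/predicted labels, purely for illustration
gold = ["A1", "A1", "A2", "B1", "B1", "B2"]
pred = ["A1", "A2", "A2", "B1", "B2", "B2"]
print(round(weighted_f1(gold, pred), 2))  # → 0.67
```

In practice a library implementation (e.g. scikit-learn's `f1_score` with `average="weighted"`) would be used; the sketch simply shows what the reported metric measures.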
FreeTxt-Vi: A Benchmarked Vietnamese-English Toolkit for Segmentation, Sentiment, and Summarisation
Hung Huy Nguyen | Mo El-Haj | Paul Rayson | Dawn Knight
Proceedings of the Fifteenth Language Resources and Evaluation Conference
FreeTxt-Vi is a free and open-source web-based toolkit for creating and analysing bilingual Vietnamese–English text collections. Positioned at the intersection of corpus linguistics and natural language processing (NLP), it enables users to build, explore, and interpret free-text data without requiring programming expertise. The system combines established corpus analysis features such as concordancing, keyword analysis, word relation exploration, and interactive visualisation with modern transformer-based NLP components for sentiment analysis and summarisation. A key contribution of this work is the design of a unified bilingual NLP pipeline that integrates a hybrid VnCoreNLP + Byte Pair Encoding (BPE) segmentation strategy, a fine-tuned TabularisAI sentiment classifier, and a fine-tuned Qwen2.5 model for abstractive summarisation. Unlike existing text analysis platforms, FreeTxt-Vi is evaluated as a set of language processing components. We conduct a three-part evaluation covering segmentation, sentiment analysis, and summarisation, and demonstrate that our approach achieves competitive or superior performance compared to widely used baselines in both Vietnamese and English. By reducing technical barriers to multilingual text analysis, FreeTxt-Vi supports reproducible research and promotes the development of language resources for Vietnamese, a widely spoken but underrepresented language in NLP. The toolkit is applicable to a wide range of domains, including education, digital humanities, cultural heritage, and the social sciences, where qualitative text data are common but often difficult to process at scale.
Creating a Hybrid Rule and Neural Network Based Semantic Tagger Using Silver Standard Data: The PyMUSAS Framework for Multilingual Semantic Annotation
Andrew Moore | Paul Rayson | Dawn Archer | Tim Czerniak | Dawn Knight | Daisy Monika Lal | Gearóid Ó Donnchadha | Mícheál J. Ó Meachair | Scott Piao | Elaine Uí Dhonnchadha | Johanna Vuorinen | Yan Yabo | Xiaobin Yang
Proceedings of the Fifteenth Language Resources and Evaluation Conference
Word Sense Disambiguation (WSD) has been widely evaluated using the semantic frameworks of WordNet, BabelNet, and the Oxford Dictionary of English. However, for the UCREL Semantic Analysis System (USAS) framework, no extensive open evaluation has been performed beyond lexical coverage or single-language evaluation. In this work, we perform the largest semantic tagging evaluation of the rule-based system that uses the lexical resources in the USAS framework, covering five different languages using four existing datasets and one novel Chinese dataset. To overcome the lack of manually tagged training data, we create a new silver-labelled English dataset on which we train and evaluate various mono- and multilingual neural models in both mono- and cross-lingual evaluation setups, compare them with their rule-based counterparts, and show how a rule-based system can be enhanced with a neural network model. The resulting neural network models, including the data they were trained on, the Chinese evaluation dataset, and all of the code will be released as open resources.
Proffiliadur: Welsh Language Text Profiling Toolkit
Nicolás Gutiérrez-Rolón | Jonathan Davies | Tomos Williams | Dawn Knight | Fernando Alva-Manchego
Proceedings of the Fifteenth Language Resources and Evaluation Conference
We introduce Proffiliadur, a Python toolkit for text profiling and readability analysis in Welsh. The toolkit computes 141 surface, lexical, morphological, and syntactic indices, designed to capture linguistic variation while incorporating a Welsh-specific tokenisation process that enables accurate morphological analysis and handles phenomena such as initial consonant mutation. Proffiliadur enables systematic assessment of text accessibility and supports applications in education, healthcare, and public communication. We demonstrate the toolkit’s usefulness through two complementary analyses. First, we examine texts written in accordance with the Cymraeg Clîr ("Clear Welsh") principles and compare them with regular Welsh texts. Second, we analyse texts across CEFR proficiency levels to explore how linguistic complexity varies with learner ability. We also evaluate feature-based and neural classification models for automatic complexity detection, showing that interpretable linguistic indices alone achieve strong predictive performance (F1 = 0.94), comparable to a fine-tuned transformer (F1 = 0.97). Proffiliadur provides the first dedicated text profiling toolkit for Welsh, offering reproducible, linguistically grounded measures of readability for a low-resource language.
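The kind of surface index such a profiling toolkit computes can be illustrated with a minimal sketch. These are generic measures (mean sentence length, mean word length, type-token ratio), not Proffiliadur's actual 141 indices, and the naive regex tokenisation here does not perform the Welsh-specific handling of phenomena such as initial consonant mutation that the toolkit provides:

```python
import re

def surface_indices(text):
    """Compute a few generic surface indices over a text:
    mean sentence length (in tokens), mean word length, and type-token ratio."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    tokens = re.findall(r"\w+", text.lower())
    return {
        "mean_sentence_length": len(tokens) / len(sentences),
        "mean_word_length": sum(len(t) for t in tokens) / len(tokens),
        "type_token_ratio": len(set(tokens)) / len(tokens),
    }

# Toy Welsh example, purely illustrative
print(surface_indices("Mae'r gath yn cysgu. Mae'r ci yn rhedeg yn gyflym."))
```

Indices like these feed directly into the feature-based complexity classifiers the paper evaluates.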
2025
FreeTxt: Analyse and Visualise Multilingual Qualitative Survey Data for Cultural Heritage Sites
Nouran Khallaf | Ignatius Ezeani | Dawn Knight | Paul Rayson | Mo El-Haj | John Vidler | James Davies | Fernando Alva-Manchego
Proceedings of the 15th International Conference on Recent Advances in Natural Language Processing - Natural Language Processing in the Generative AI Era
We introduce FreeTxt, a free and open-source web-based tool designed to support the analysis and visualisation of multilingual qualitative survey data, with a focus on low-resource languages. Developed in collaboration with stakeholders, FreeTxt integrates established techniques from corpus linguistics with modern natural language processing methods in an intuitive interface accessible to non-specialists. The tool currently supports bilingual processing and visualisation of English and Welsh responses, with ongoing extensions to other languages such as Vietnamese. Key functionalities include semantic tagging via PyMUSAS, multilingual sentiment analysis, keyword and collocation visualisation, and extractive summarisation. User evaluations with cultural heritage institutions demonstrate the system’s utility and potential for broader impact.
SENTimental - a Simple Multilingual Sentiment Annotation Tool
John Vidler | Paul Rayson | Dawn Knight
Proceedings of the 15th International Conference on Recent Advances in Natural Language Processing - Natural Language Processing in the Generative AI Era
Here we present SENTimental, a simple and fast web-based, mobile-friendly tool for capturing sentiment annotations from participants and citizen-scientist volunteers to create training and testing data for low-resource languages. In contrast to existing tools, we focus on assigning broad values to segments of text rather than specific tags for tokens or spans, in order to build datasets for training and testing LLMs. The SENTimental interface minimises barriers to entry, with the goal of maximising the time a user spends in a flow state in which they can quickly and accurately rate each text fragment without being distracted by the complexity of the interface. Designed from the outset to handle multilingual representations, SENTimental allows parallel corpus data to be presented to the user and switched between instantly for immediate comparison. As such, users of any loaded language can contribute to the data gathered, building up comparable rankings in a simple structured dataset for later processing.
UniversalCEFR: Enabling Open Multilingual Research on Language Proficiency Assessment
Joseph Marvin Imperial | Abdullah Barayan | Regina Stodden | Rodrigo Wilkens | Ricardo Muñoz Sánchez | Lingyun Gao | Melissa Torgbi | Dawn Knight | Gail Forey | Reka R. Jablonkai | Ekaterina Kochmar | Robert Joshua Reynolds | Eugénio Ribeiro | Horacio Saggion | Elena Volodina | Sowmya Vajjala | Thomas François | Fernando Alva-Manchego | Harish Tayyar Madabushi
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing
We introduce UniversalCEFR, a large-scale multilingual, multidimensional dataset of texts annotated according to the CEFR (Common European Framework of Reference) scale in 13 languages. To enable open research in both automated readability and language proficiency assessment, UniversalCEFR comprises 505,807 CEFR-labelled texts curated from educational and learner-oriented resources, standardised into a unified data format to support consistent processing, analysis, and modelling across tasks and languages. To demonstrate its utility, we conduct benchmark experiments using three modelling paradigms: a) linguistic feature-based classification, b) fine-tuning pre-trained LLMs, and c) descriptor-based prompting of instruction-tuned LLMs. Our results further support using linguistic features and fine-tuning pre-trained models in multilingual CEFR level assessment. Overall, UniversalCEFR aims to establish best practices in data distribution in language proficiency research by standardising dataset formats and promoting their accessibility to the global research community.
2023
Open-Source Thesaurus Development for Under-Resourced Languages: a Welsh Case Study
Nouran Khallaf | Elin Arfon | Mo El-Haj | Jonathan Morris | Dawn Knight | Paul Rayson | Tymaa Hasanain Hammouda | Mustafa Jarrar
Proceedings of the 4th Conference on Language, Data and Knowledge
2022
PriPA: A Tool for Privacy-Preserving Analytics of Linguistic Data
Jeremie Clos | Emma McClaughlin | Pepita Barnard | Elena Nichele | Dawn Knight | Derek McAuley | Svenja Adolphs
Proceedings of the Workshop on Ethical and Legal Issues in Human Language Technologies and Multilingual De-Identification of Sensitive Data In Language Resources within the 13th Language Resources and Evaluation Conference
The days of large amorphous corpora collected with armies of Web crawlers and stored indefinitely are, or should be, coming to an end. There is a wealth of linguistic information that is increasingly difficult to access, hidden in personal data that would be unethical and technically challenging to collect using traditional methods such as Web crawling and mass surveillance of online discussion spaces. Advances in privacy regulations such as the GDPR, and changes in the public perception of privacy, call into question the problematic ethical dimension of extracting information from unaware, if not unwilling, participants. Modern corpora need to adapt, be focused on testing specific hypotheses, and be respectful of the privacy of the people who generated their data. Our work uses a distributed participatory approach and continuous informed consent to address these issues, by allowing participants to voluntarily contribute their own censored personal data at a granular level. We evaluate our approach in a three-pronged manner: testing the accuracy of statistical measures of language against standard corpus linguistics tools, evaluating the usability of our application with a participant involvement panel, and using the tool for a case study on health communication.
Creation of an Evaluation Corpus and Baseline Evaluation Scores for Welsh Text Summarisation
Mahmoud El-Haj | Ignatius Ezeani | Jonathan Morris | Dawn Knight
Proceedings of the 4th Celtic Language Technology Workshop within LREC2022
As part of the effort to increase the availability of Welsh digital technology, this paper introduces the first human-versus-metrics Welsh summarisation evaluation results and dataset, which we provide freely for research purposes to help advance work on Welsh summarisation. The system summaries were created using an extractive graph-based Welsh summariser and were evaluated by both humans and a range of ROUGE metric variants (e.g. ROUGE-1, ROUGE-2, ROUGE-L, and ROUGE-SU4). The summaries and evaluation results will serve as benchmarks for the development of summarisers and evaluation metrics in other minority language contexts.
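ROUGE-L, one of the metric variants used, scores a system summary by the longest common subsequence (LCS) of tokens it shares with a reference summary. The sketch below is an illustrative pure-Python re-implementation, not the paper's evaluation code; in practice a standard ROUGE toolkit would be used:

```python
def rouge_l_f1(reference, candidate, beta=1.0):
    """ROUGE-L: F-measure over the longest common subsequence of tokens."""
    ref, cand = reference.split(), candidate.split()
    # Dynamic-programming table for LCS length
    dp = [[0] * (len(cand) + 1) for _ in range(len(ref) + 1)]
    for i, r in enumerate(ref, 1):
        for j, c in enumerate(cand, 1):
            dp[i][j] = dp[i - 1][j - 1] + 1 if r == c else max(dp[i - 1][j], dp[i][j - 1])
    lcs = dp[-1][-1]
    if lcs == 0:
        return 0.0
    p, r = lcs / len(cand), lcs / len(ref)  # LCS-based precision and recall
    return (1 + beta**2) * p * r / (r + beta**2 * p)

# Toy English example: LCS is "the cat on the mat" (5 tokens)
print(round(rouge_l_f1("the cat sat on the mat", "the cat is on the mat"), 3))  # → 0.833
```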
Introducing the Welsh Text Summarisation Dataset and Baseline Systems
Ignatius Ezeani | Mahmoud El-Haj | Jonathan Morris | Dawn Knight
Proceedings of the Thirteenth Language Resources and Evaluation Conference
Welsh is an official language in Wales and is spoken by an estimated 884,300 people (29.2% of the population of Wales). Despite this status and estimated increase in speaker numbers since the last (2011) census, Welsh remains a minority language undergoing revitalisation and promotion by Welsh Government and relevant stakeholders. As part of the effort to increase the availability of Welsh digital technology, this paper introduces the first Welsh summarisation dataset, which we provide freely for research purposes to help advance the work on Welsh summarisation. The dataset was created by Welsh speakers through manually summarising Welsh Wikipedia articles. In addition, the paper discusses the implementation and evaluation of different summarisation systems for Welsh. The summarisation systems and results will serve as benchmarks for the development of summarisers in other minority language contexts.
2019
Unsupervised multi-word term recognition in Welsh
Irena Spasić | David Owen | Dawn Knight | Andreas Artemiou
Proceedings of the Celtic Language Technology Workshop
Leveraging Pre-Trained Embeddings for Welsh Taggers
Ignatius Ezeani | Scott Piao | Steven Neale | Paul Rayson | Dawn Knight
Proceedings of the 4th Workshop on Representation Learning for NLP (RepL4NLP-2019)
While the application of word embedding models to downstream Natural Language Processing (NLP) tasks has been shown to be successful, the benefits for low-resource languages are somewhat limited due to a lack of adequate data for training the models. However, NLP research efforts for low-resource languages have focused on seeking ways to harness pre-trained models to improve the performance of NLP systems built to process these languages without the need to re-invent the wheel. Welsh is one such language, and in this paper we therefore present the results of our experiments on learning a simple multi-task neural network model for part-of-speech and semantic tagging of Welsh using a pre-trained embedding model from FastText. Our model’s performance was compared with that of the existing stand-alone rule-based part-of-speech and semantic taggers. Despite its simplicity and its capacity to perform both tasks simultaneously, our tagger compared very well with the existing taggers.
2018
Leveraging Lexical Resources and Constraint Grammar for Rule-Based Part-of-Speech Tagging in Welsh
Steven Neale | Kevin Donnelly | Gareth Watkins | Dawn Knight
Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018)
Towards a Welsh Semantic Annotation System
Scott Piao | Paul Rayson | Dawn Knight | Gareth Watkins
Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018)
2016
Lexical Coverage Evaluation of Large-scale Multilingual Semantic Lexicons for Twelve Languages
Scott Piao | Paul Rayson | Dawn Archer | Francesca Bianchi | Carmen Dayrell | Mahmoud El-Haj | Ricardo-María Jiménez | Dawn Knight | Michal Křen | Laura Löfberg | Rao Muhammad Adeel Nawab | Jawad Shafi | Phoey Lee Teh | Olga Mudraya
Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16)
The last two decades have seen the development of various semantic lexical resources such as WordNet (Miller, 1995) and the USAS semantic lexicon (Rayson et al., 2004), which have played an important role in the areas of natural language processing and corpus-based studies. Recently, increasing efforts have been devoted to extending the semantic frameworks of existing lexical knowledge resources to cover more languages, such as EuroWordNet and Global WordNet. In this paper, we report on the construction of large-scale multilingual semantic lexicons for twelve languages, which employ the unified Lancaster semantic taxonomy and provide a multilingual lexical knowledge base for the automatic UCREL semantic annotation system (USAS). Our work contributes towards the goal of constructing larger-scale and higher-quality multilingual semantic lexical resources and developing corpus annotation tools based on them. Lexical coverage is an important factor concerning the quality of the lexicons and the performance of the corpus annotation tools, and in this experiment we focus on evaluating the lexical coverage achieved by the multilingual lexicons and semantic annotation tools based on them. Our evaluation shows that some semantic lexicons such as those for Finnish and Italian have achieved lexical coverage of over 90% while others need further expansion.
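Token-level lexical coverage of the kind evaluated here is the fraction of corpus tokens that the semantic lexicon can tag. The following is a minimal sketch with a hypothetical toy lexicon; the actual USAS evaluation also involves multi-word expressions and richer matching than simple lowercase lookup:

```python
def lexical_coverage(corpus_tokens, lexicon):
    """Fraction of corpus tokens found in the semantic lexicon (token-level coverage)."""
    matched = sum(1 for tok in corpus_tokens if tok.lower() in lexicon)
    return matched / len(corpus_tokens)

# Hypothetical toy lexicon and corpus, purely for illustration
lexicon = {"the", "cat", "sleeps", "on", "mat"}
tokens = "The cat sleeps on the warm mat".split()
print(round(lexical_coverage(tokens, lexicon), 2))  # 6 of 7 tokens matched → 0.86
```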
2008
Introducing DRS (The Digital Replay System): a Tool for the Future of Corpus Linguistic Research and Analysis
Dawn Knight | Paul Tennent
Proceedings of the Sixth International Conference on Language Resources and Evaluation (LREC'08)
This paper outlines the new resource technologies, products and applications constructed during the development of a multi-modal (MM hereafter) corpus tool on the DReSS project (Understanding New Forms of the Digital Record for e-Social Science), based at the University of Nottingham, England. The paper provides a brief outline of the DRS (Digital Replay System, the software tool at the heart of the corpus), highlighting its facility to display synchronised video, audio and textual data and, most relevantly, a concordance tool capable of interrogating data constructed from textual transcriptions anchored to video or audio, and from coded annotations of specific features of gesture-in-talk. This is complemented by a real-time demonstration of the DRS interface in use at the LREC 2008 conference, which serves to show how a system such as the DRS can facilitate the assembly, storage and analysis of multi-modal corpora, supporting both qualitative and quantitative approaches to the analysis of collected data.