Mehwish Alam
2026
It’s All About the Confidence: An Unsupervised Approach for Multilingual Historical Entity Linking using Large Language Models
Cristian Santini | Marieke van Erp | Mehwish Alam
Proceedings of the 19th Conference of the European Chapter of the Association for Computational Linguistics (Volume 1: Long Papers)
Despite the recent advancements in NLP with the advent of Large Language Models (LLMs), Entity Linking (EL) for historical texts remains challenging due to linguistic variation, noisy inputs, and evolving semantic conventions. Existing solutions either require substantial training data or rely on domain-specific rules that limit scalability. In this paper, we present MHEL-LLaMo (Multilingual Historical Entity Linking with Large Language MOdels), an unsupervised ensemble approach combining a Small Language Model (SLM) and an LLM. MHEL-LLaMo leverages a multilingual bi-encoder (BELA) for candidate retrieval and an instruction-tuned LLM for NIL prediction and candidate selection via prompt chaining. Our system uses the SLM’s confidence scores to discriminate between easy and hard samples, applying the LLM only to hard cases. This strategy reduces computational costs while preventing hallucinations on straightforward cases. We evaluate MHEL-LLaMo on four established benchmarks in six European languages (English, Finnish, French, German, Italian and Swedish) from the 19th and 20th centuries. Results demonstrate that MHEL-LLaMo outperforms state-of-the-art models without requiring fine-tuning, offering a scalable solution for low-resource historical EL. Our error analysis reveals that 41% of false predictions exhibit semantic proximity to ground truth entities, highlighting the LLM’s accurate disambiguation of historical references.
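The confidence-based routing the abstract describes can be sketched minimally as follows. This is an illustrative assumption of the control flow only: the function names `slm_link` and `llm_link` and the threshold value are hypothetical, not the paper's actual API.

```python
def link_entity(mention, context, slm_link, llm_link, threshold=0.85):
    """Route a mention to the SLM or the LLM based on retrieval confidence.

    slm_link: returns (candidate_entity, confidence) from the bi-encoder.
    llm_link: prompt-chained LLM used only for low-confidence (hard) cases;
              it may return None to signal a NIL (unlinkable) mention.
    """
    candidate, confidence = slm_link(mention, context)
    if confidence >= threshold:
        # Easy case: trust the SLM and skip the expensive LLM call.
        return candidate
    # Hard case: defer to the LLM for NIL prediction and candidate selection.
    return llm_link(mention, context)
```

The point of the design is that the LLM is invoked only below the confidence threshold, so the bulk of mentions are resolved by the cheap bi-encoder alone.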
ENEIDE: A High Quality Silver Standard Dataset for Named Entity Recognition and Linking in Historical Italian
Cristian Santini | Sebastian Barzaghi | Paolo Sernani | Emanuele Frontoni | Laura Melosi | Mehwish Alam
Proceedings of the Fifteenth Language Resources and Evaluation Conference
This paper introduces ENEIDE (Extracting Named Entities from Italian Digital Editions), a silver standard dataset for Named Entity Recognition and Linking (NERL) in historical Italian texts. The corpus comprises 2,111 documents with over 8,000 entity annotations semi-automatically extracted from two scholarly digital editions: Digital Zibaldone, the philosophical diary of the Italian poet Giacomo Leopardi (1798–1837), and Aldo Moro Digitale, the complete works of the Italian politician Aldo Moro (1916–1978). Annotations cover multiple entity types (person, location, organization, literary work) linked to Wikidata identifiers, including NIL entities that cannot be mapped to the knowledge graph. To the best of our knowledge, ENEIDE represents the first multi-domain, publicly available NERL dataset for historical Italian with training, development, and test splits. We present a methodology for semi-automatic annotation extraction from manually curated scholarly digital editions, including quality control and annotation enhancement procedures. Baseline experiments using state-of-the-art models demonstrate the dataset’s difficulty for NERL and the gap between zero-shot approaches and fine-tuned models. The dataset’s diachronic coverage spanning two centuries makes it particularly suitable for temporal entity disambiguation and cross-domain evaluation. ENEIDE is released under a CC BY-NC-SA 4.0 license.
Analysing Lightweight Large Language Models for Biomedical Named Entity Recognition on Diverse Output Formats
Pierre Epron | Adrien Coulet | Mehwish Alam
Proceedings of the Fifteenth Language Resources and Evaluation Conference
Despite their strong linguistic capabilities, Large Language Models (LLMs) are computationally demanding and require substantial resources for fine-tuning, which is ill-suited to the privacy and budget constraints of many healthcare settings. To address this, we present an experimental analysis of Biomedical Named Entity Recognition using lightweight LLMs, evaluating the impact of different output formats on model performance. The results reveal that lightweight LLMs can achieve competitive performance compared to larger models, highlighting their potential as lightweight yet effective alternatives for biomedical information extraction. Our analysis shows that instruction tuning over many distinct formats does not improve performance, but it identifies several formats consistently associated with better performance.
2025
Proceedings of the 5th Conference on Language, Data and Knowledge
Mehwish Alam | Andon Tchechmedjiev | Jorge Gracia | Dagmar Gromann | Maria Pia di Buono | Johanna Monti | Maxim Ionov
2022
Proceedings of the LREC 2022 workshop on Natural Language Processing for Political Sciences
Haithem Afli | Mehwish Alam | Houda Bouamor | Cristina Blasi Casagran | Colleen Boland | Sahar Ghannay
TF-IDF or Transformers for Arabic Dialect Identification? ITFLOWS participation in the NADI 2022 Shared Task
Fouad Shammary | Yiyi Chen | Zsolt T Kardkovacs | Mehwish Alam | Haithem Afli
Proceedings of the Seventh Arabic Natural Language Processing Workshop (WANLP)
This study targets the shared task on Nuanced Arabic Dialect Identification (NADI) organized with the Workshop on Arabic Natural Language Processing (WANLP). It focuses on Subtask 1, the identification of Arabic dialects at the country level. More specifically, it first studies the impact of a traditional approach based on TF-IDF and then moves on to advanced deep-learning-based methods. These methods include fully fine-tuning MARBERT as well as adapter-based fine-tuning of MARBERT, with and without data augmentation. The evaluation shows that the traditional TF-IDF-based approach achieves the best accuracy on the TEST-A dataset, while the adapter-based fine-tuned MARBERT with augmented data ranks second on Macro F1-score on the TEST-B dataset. As a result, the proposed system was ranked second on average in the shared task.
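The "traditional TF-IDF approach" mentioned above can be sketched in plain Python: character n-gram TF-IDF features with nearest-neighbor matching. This is a minimal stand-in, not the system submitted to the shared task; the toy Latin-script examples and the trigram/cosine choices are illustrative assumptions (the actual task uses Arabic tweets labelled by country).

```python
import math
from collections import Counter

def char_ngrams(text, n=3):
    """Character trigrams, a common feature choice for dialect identification."""
    return [text[i:i + n] for i in range(len(text) - n + 1)]

def tfidf_vectors(docs):
    """TF-IDF weights over character trigrams, one sparse dict per document."""
    tfs = [Counter(char_ngrams(d)) for d in docs]
    df = Counter(g for tf in tfs for g in set(tf))
    n_docs = len(docs)
    return [
        {g: tf[g] * math.log((1 + n_docs) / (1 + df[g])) for g in tf}
        for tf in tfs
    ]

def cosine(u, v):
    dot = sum(w * v.get(g, 0.0) for g, w in u.items())
    norm = (math.sqrt(sum(w * w for w in u.values()))
            * math.sqrt(sum(w * w for w in v.values())))
    return dot / norm if norm else 0.0

def predict_dialect(train_texts, train_labels, text):
    """Label a text with the dialect of its most similar training example."""
    vecs = tfidf_vectors(train_texts + [text])
    query, train_vecs = vecs[-1], vecs[:-1]
    sims = [cosine(query, v) for v in train_vecs]
    return train_labels[sims.index(max(sims))]
```

In practice the shared-task systems feed such TF-IDF features into a trained classifier rather than a nearest-neighbor lookup, but the feature side is the same idea.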
On the Impact of Temporal Representations on Metaphor Detection
Giorgio Ottolina | Matteo Palmonari | Manuel Vimercati | Mehwish Alam
Proceedings of the Thirteenth Language Resources and Evaluation Conference
State-of-the-art approaches for metaphor detection compare a word’s literal - or core - meaning with its contextual meaning using metaphor classifiers based on neural networks. However, metaphorical expressions evolve over time due to various reasons, such as cultural and societal impact. Metaphorical expressions are known to co-evolve with language and literal word meanings, and even drive, to some extent, this evolution. This poses the question of whether different, possibly time-specific, representations of literal meanings may impact the metaphor detection task. To the best of our knowledge, this is the first study that examines the metaphor detection task with a detailed exploratory analysis where different temporal and static word embeddings are used to account for different representations of literal meanings. Our experimental analysis is based on three popular benchmarks used for metaphor detection and word embeddings extracted from different corpora and temporally aligned using different state-of-the-art approaches. The results suggest that the usage of different static word embedding methods does impact the metaphor detection task and that some temporal word embeddings slightly outperform static methods. However, the results also suggest that temporal word embeddings may provide representations of a metaphor’s core meaning that are too close to its contextual meaning, thus confusing the classifier. Overall, the interaction between temporal language evolution and metaphor detection appears small in the benchmark datasets used in our experiments. This suggests that future work on the computational analysis of this important linguistic phenomenon should first create a new dataset in which this interaction is better represented.
2010
PDTB XML: the XMLization of the Penn Discourse TreeBank 2.0
Xuchen Yao | Irina Borisova | Mehwish Alam
Proceedings of the Seventh International Conference on Language Resources and Evaluation (LREC'10)
The current study presents a conversion and unification of the Penn Discourse TreeBank 2.0 (PDTB) and the Penn TreeBank (PTB) under XML format. The main goal of the PDTB XML is to create a tool for efficient and broad querying of syntax and discourse information simultaneously. The key stages of the project are developing proper cross-references between different data types and their representation in the modified TIGER-XML format, and then writing the required declarative languages (XML Schema). PTB XML is compatible with the TIGER-XML format. The PDTB XML is developed as a unified format for the convenience of XQuery users; it integrates discourse relations and XML structures into one unified hierarchy and builds cross-references between the syntactic trees and the discourse relations. The syntactic and discourse elements are assigned unique IDs in order to build cross-references between them. The converted corpus allows for a simultaneous search for syntactically specified discourse information based on the XQuery standard, which is illustrated with a simple example in the article.
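The ID-based cross-referencing scheme described above - unique IDs on syntactic and discourse elements, with discourse relations pointing back into the syntax layer - can be illustrated with a small self-contained sketch. The element and attribute names below are hypothetical stand-ins, not the actual PDTB XML schema, and the lookup is done in Python rather than XQuery purely for brevity.

```python
import xml.etree.ElementTree as ET

# A minimal, hypothetical corpus fragment: a discourse relation refers to
# two syntactic nodes by their unique IDs (element names are illustrative).
doc = ET.fromstring("""
<corpus>
  <syntax>
    <node id="s1_n4" cat="S">the market fell sharply</node>
    <node id="s2_n1" cat="S">investors had expected a rebound</node>
  </syntax>
  <discourse>
    <relation id="rel_7" sense="Comparison.Contrast">
      <arg ref="s1_n4"/>
      <arg ref="s2_n1"/>
    </relation>
  </discourse>
</corpus>
""")

# Resolve a relation's arguments back to the syntax layer via the IDs,
# mirroring the joint syntax-discourse queries the unified format enables.
nodes = {n.get("id"): n for n in doc.iter("node")}
relation = doc.find(".//relation")
args = [nodes[a.get("ref")].text for a in relation.findall("arg")]
```

Because every element carries a unique ID, a query can move freely between the two annotation layers without duplicating the text itself.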
Co-authors
- Haithem Afli 2
- Cristian Santini 2
- Sebastian Barzaghi 1
- Colleen Boland 1
- Irina Borisova 1
- Houda Bouamor 1
- Cristina Blasi Casagran 1
- Yiyi Chen 1
- Adrien Coulet 1
- Maria Pia Di Buono 1
- Pierre Epron 1
- Emanuele Frontoni 1
- Sahar Ghannay 1
- Jorge Gracia 1
- Dagmar Gromann 1
- Maxim Ionov 1
- Zsolt T. Kardkovács 1
- Laura Melosi 1
- Johanna Monti 1
- Giorgio Ottolina 1
- Matteo Palmonari 1
- Paolo Sernani 1
- Fouad Shammary 1
- Andon Tchechmedjiev 1
- Manuel Vimercati 1
- Xuchen Yao 1
- Marieke van Erp 1