Florin Pop
2026
RoD-TAL: A Benchmark for Answering Questions in Romanian Driving License Exams
Andrei Vlad Man | Răzvan-Alexandru Smădu | Cristian-George Craciun | Dumitru-Clementin Cercel | Florin Pop | Mihaela-Claudia Cercel
Findings of the Association for Computational Linguistics: EACL 2026
The intersection of AI and legal systems presents a growing need for tools that support legal education, particularly in under-resourced languages such as Romanian. In this work, we aim to evaluate the capabilities of Large Language Models (LLMs) and Vision-Language Models (VLMs) in understanding and reasoning about Romanian driving law through textual and visual question-answering tasks. To facilitate this, we introduce RoD-TAL, a novel multimodal dataset comprising text-based and image-based Romanian driving test questions, along with annotated legal references and explanations written by human experts. We implement and assess retrieval-augmented generation (RAG) pipelines, dense retrievers, and reasoning-optimized models across tasks including Information Retrieval (IR), Question Answering (QA), Visual IR, and Visual QA. Our experiments demonstrate that domain-specific fine-tuning significantly enhances retrieval performance. At the same time, chain-of-thought prompting and specialized reasoning models improve QA accuracy, surpassing the minimum passing grades required for driving exams. We highlight the potential and limitations of applying LLMs and VLMs to legal education. We release the code and resources through the GitHub repository (https://github.com/vladman-25/RoD-TAL).
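The retrieval step of such a RAG pipeline can be sketched as follows; the bag-of-words scoring and sample articles below are illustrative stand-ins for the fine-tuned dense retrievers evaluated in the paper:

```python
import math
from collections import Counter

def embed(text):
    # Toy bag-of-words "embedding"; a real pipeline would use a dense retriever.
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse count vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(question, articles, k=1):
    # Rank candidate legal articles by similarity to the question.
    q = embed(question)
    return sorted(articles, key=lambda art: cosine(q, embed(art)), reverse=True)[:k]

# Hypothetical legal-article snippets, not actual Romanian traffic-law text.
articles = [
    "Drivers must yield to pedestrians at marked crossings.",
    "The maximum speed inside localities is 50 km/h.",
]
context = retrieve("What is the speed limit in a locality?", articles)
prompt = f"Context: {context[0]}\nQuestion: What is the speed limit in a locality?"
```

The generator (an LLM or VLM) would then answer conditioned on `prompt` rather than on the question alone.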
2025
RoLargeSum: A Large Dialect-Aware Romanian News Dataset for Summary, Headline, and Keyword Generation
Andrei-Marius Avram | Mircea Timpuriu | Andreea Iuga | Vlad-Cristian Matei | Iulian-Marius Taiatu | Tudor Găină | Dumitru-Clementin Cercel | Mihaela-Claudia Cercel | Florin Pop
Proceedings of the 31st International Conference on Computational Linguistics
Using supervised automatic summarization methods requires sufficiently large corpora of document-summary pairs. As with many tasks in natural language processing, most of the datasets available for summarization are in English, posing challenges for developing summarization models in other languages. Thus, in this work, we introduce RoLargeSum, a novel large-scale summarization dataset for the Romanian language, crawled from various publicly available news websites in Romania and the Republic of Moldova and thoroughly cleaned to ensure a high-quality standard. RoLargeSum contains more than 615K news articles, together with their summaries, headlines, keywords, dialect, and other metadata found on the targeted websites. We further evaluated the performance of several BART variants and open-source large language models on RoLargeSum for benchmarking purposes. We manually evaluated the results of the best-performing system to gain insight into the potential pitfalls of this dataset and to guide future development.
2024
Investigating Large Language Models for Complex Word Identification in Multilingual and Multidomain Setups
Răzvan-Alexandru Smădu | David-Gabriel Ion | Dumitru-Clementin Cercel | Florin Pop | Mihaela-Claudia Cercel
Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing
Complex Word Identification (CWI) is an essential step in the lexical simplification task and has recently become a task on its own. Some variations of this binary classification task have emerged, such as lexical complexity prediction (LCP) and complexity evaluation of multi-word expressions (MWE). Large language models (LLMs) recently became popular in the Natural Language Processing community because of their versatility and capability to solve unseen tasks in zero/few-shot settings. Our work investigates LLM usage, specifically open-source models such as Llama 2, Llama 3, and Vicuna v1.5, and closed-source models such as ChatGPT-3.5-turbo and GPT-4o, in the CWI, LCP, and MWE settings. We evaluate zero-shot, few-shot, and fine-tuning settings and show that LLMs either struggle in certain conditions or achieve only comparable results against existing methods. In addition, we provide some views on meta-learning combined with prompt learning. In the end, we conclude that current LLMs barely outperform, or cannot outperform, existing methods, which are usually much smaller.
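Zero- and few-shot evaluation of this kind reduces to building one classification prompt per target word; the sketch below shows a hypothetical prompt template (the example sentences and the `Complex:` label format are illustrative, not the paper's exact prompts):

```python
def cwi_prompt(sentence, word, examples=()):
    """Build a zero- or few-shot binary CWI prompt for an LLM.

    examples: optional (sentence, word, label) triples prepended as shots.
    """
    shots = "".join(
        f"Sentence: {s}\nWord: {w}\nComplex: {label}\n\n"
        for s, w, label in examples
    )
    # The model is expected to continue after "Complex:" with yes/no.
    return shots + f"Sentence: {sentence}\nWord: {word}\nComplex:"

# One-shot example: a single labeled instance followed by the query.
few_shot = cwi_prompt(
    "The physician prescribed an anticoagulant.",
    "anticoagulant",
    examples=[("The cat sat on the mat.", "cat", "no")],
)
```

The returned string would be sent to an open- or closed-source LLM, and the first generated token parsed as the complexity label.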
2023
From Fake to Hyperpartisan News Detection Using Domain Adaptation
Răzvan-Alexandru Smădu | Sebastian-Vasile Echim | Dumitru-Clementin Cercel | Iuliana Marin | Florin Pop
Proceedings of the 14th International Conference on Recent Advances in Natural Language Processing
Unsupervised Domain Adaptation (UDA) is a popular technique that aims to reduce the domain shift between two data distributions. It has been successfully applied in computer vision and natural language processing. In the current work, we explore the effects of various unsupervised domain adaptation techniques between two text classification tasks: fake and hyperpartisan news detection. We investigate the knowledge transfer from fake to hyperpartisan news detection without involving target labels during training. Thus, we evaluate UDA, cluster alignment with a teacher, and cross-domain contrastive learning. Extensive experiments show that these techniques improve performance, and that including data augmentation further enhances the results. In addition, we combine clustering and topic modeling algorithms with UDA, resulting in improved performance compared to the initial UDA setup.
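Cross-domain contrastive learning of this kind scores a source-domain feature against one aligned target-domain feature and several mismatched ones; the following is a minimal numpy sketch of an InfoNCE-style objective, with random vectors standing in for learned features (not the paper's exact loss):

```python
import numpy as np

def info_nce(anchor, positive, negatives, temp=0.1):
    # Pull the anchor toward its positive (aligned cross-domain feature)
    # and push it away from the negatives; returns the NLL of the positive.
    def cos(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    logits = np.array([cos(anchor, positive)]
                      + [cos(anchor, n) for n in negatives]) / temp
    logits -= logits.max()                      # numerical stability
    probs = np.exp(logits) / np.exp(logits).sum()
    return -np.log(probs[0])

rng = np.random.default_rng(0)
anchor = rng.normal(size=16)
# A well-aligned positive yields a low loss; an unrelated one does not.
aligned_loss = info_nce(anchor, anchor + 0.01 * rng.normal(size=16),
                        [rng.normal(size=16) for _ in range(4)])
random_loss = info_nce(anchor, rng.normal(size=16),
                       [rng.normal(size=16) for _ in range(4)])
```

Minimizing such a loss over mini-batches encourages the encoder to map semantically similar fake-news and hyperpartisan-news examples to nearby points.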
2022
Legal Named Entity Recognition with Multi-Task Domain Adaptation
Răzvan-Alexandru Smădu | Ion-Robert Dinică | Andrei-Marius Avram | Dumitru-Clementin Cercel | Florin Pop | Mihaela-Claudia Cercel
Proceedings of the Natural Legal Language Processing Workshop 2022
Named Entity Recognition (NER) is a well-explored area of Information Retrieval and Natural Language Processing with an extensive research community. Despite that, only a few languages, such as English and German, are well-resourced, whereas many others, such as Romanian, have scarce resources, especially in domain-specific applications. In this work, we address the NER problem in the legal domain for both the Romanian and German languages and evaluate the performance of our proposed method based on domain adaptation. We employ multi-task learning to jointly train a neural network on the legal and general domains and perform adaptation between them. The results show that domain adaptation increases performance by a small amount, under 1%, with considerable improvements in the recall metric.
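The multi-task setup can be pictured as a shared encoder feeding one tag-classification head per domain; the dimensions, random weights, and tag counts below are purely illustrative, not those used in the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Shared encoder weights plus one NER tag head per domain (legal vs. general);
# both heads backpropagate into W_shared during joint training.
W_shared = rng.normal(size=(8, 4))
heads = {"legal": rng.normal(size=(4, 3)), "general": rng.normal(size=(4, 5))}

def forward(x, domain):
    h = np.tanh(x @ W_shared)   # representation shared across both tasks
    return h @ heads[domain]    # domain-specific tag scores

tokens = rng.normal(size=(2, 8))  # two token vectors from either domain
legal_scores = forward(tokens, "legal")
general_scores = forward(tokens, "general")
```

Because only the heads are domain-specific, gradients from the resource-rich general domain shape the representation that the scarce legal domain also uses.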
2017
oIQa: An Opinion Influence Oriented Question Answering Framework with Applications to Marketing Domain
Dumitru-Clementin Cercel | Cristian Onose | Stefan Trausan-Matu | Florin Pop
Proceedings of the 1st Workshop on Natural Language Processing and Information Retrieval associated with RANLP 2017
Understanding questions and answers in a QA system is a major challenge in the domain of natural language processing. In this paper, we present a question answering system that influences human opinions in a conversation. The opinion words are quantified using a lexicon-based method. We apply Latent Semantic Analysis and the cosine similarity measure between candidate answers and each question to infer the answer of the chatbot.
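The LSA-plus-cosine answer-selection step can be sketched as follows; the toy answers and the latent dimension `k` below are illustrative, not the paper's data or settings:

```python
import numpy as np

# Hypothetical candidate answers and question, not from the paper's corpus.
answers = [
    "this phone has a great camera",
    "battery life is quite poor",
    "the camera quality impressed everyone",
]
question = "how good is the camera"

vocab = sorted({w for doc in answers + [question] for w in doc.split()})
def vec(doc):
    return np.array([doc.split().count(w) for w in vocab], dtype=float)

# Latent Semantic Analysis: truncated SVD of the answer-term matrix.
A = np.stack([vec(d) for d in answers])
U, S, Vt = np.linalg.svd(A, full_matrices=False)
k = 2
project = lambda v: v @ Vt[:k].T  # map a term vector into the latent space

def cos(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9)

# Pick the candidate answer closest to the question in the latent space.
q = project(vec(question))
scores = [cos(q, project(vec(d))) for d in answers]
best = answers[int(np.argmax(scores))]
```

Working in the reduced latent space, rather than on raw term counts, lets near-synonymous wordings score as similar even with little direct word overlap.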