Patricia Martin-Rodilla



2025

Enhancing Discourse Parsing for Local Structures from Social Media with LLM-Generated Data
Martial Pastor | Nelleke Oostdijk | Patricia Martin-Rodilla | Javier Parapar
Proceedings of the 31st International Conference on Computational Linguistics

We explore the use of discourse parsers for extracting a particular discourse structure in a real-world social media scenario. Specifically, we focus on enhancing parser performance through the integration of synthetic data generated by large language models (LLMs). We conduct experiments using a newly developed dataset of 1,170 local RST discourse structures, comprising 900 synthetic and 270 gold examples and covering three social media platforms: online news comment sections, a discussion forum (Reddit), and a social media messaging platform (Twitter). Our primary goal is to assess the impact of LLM-generated synthetic training data on parser performance in a raw-text setting without pre-identified discourse units. While both top-down and bottom-up RST architectures benefit greatly from synthetic data, challenges remain in classifying evaluative discourse structures.
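
The paper itself does not include code, but the data-generation idea can be illustrated. Below is a minimal sketch of prompting an LLM for synthetic local RST examples; the relation inventory, prompt wording, and the llm_generate() helper are hypothetical placeholders, not the authors' actual pipeline.

```python
# Hypothetical sketch: generating synthetic local RST training examples
# with an LLM. Prompt wording, relation labels, and llm_generate() are
# illustrative assumptions, not the authors' setup.
from dataclasses import dataclass
from typing import Callable

# A "local" RST structure here: two adjacent discourse units joined
# by a single rhetorical relation (e.g. Elaboration, Evaluation).
RELATIONS = ["Elaboration", "Evaluation", "Contrast", "Cause"]

@dataclass
class LocalRSTExample:
    nucleus: str      # central discourse unit
    satellite: str    # supporting discourse unit
    relation: str     # rhetorical relation label
    platform: str     # e.g. "reddit", "twitter", "news_comments"

PROMPT_TEMPLATE = (
    "Write a short {platform} comment consisting of exactly two clauses, "
    "where the second clause stands in a {relation} relation to the first. "
    "Return the two clauses separated by ' ||| '."
)

def generate_synthetic(
    llm_generate: Callable[[str], str],  # placeholder for any LLM call
    relation: str,
    platform: str,
) -> LocalRSTExample:
    """Ask the LLM for one nucleus/satellite pair with a known relation."""
    prompt = PROMPT_TEMPLATE.format(platform=platform, relation=relation)
    raw = llm_generate(prompt)
    nucleus, _, satellite = raw.partition(" ||| ")
    return LocalRSTExample(nucleus.strip(), satellite.strip(), relation, platform)
```

Because the relation label is fixed in the prompt, each generated pair arrives pre-labeled and can be added directly to the parser's training data alongside the gold examples.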

IEGPS-CSIC at SemEval-2025 Task 11: BERT-based approach for Multi-label Emotion Detection in English and Russian texts
Albina Sarymsakova | Patricia Martin-Rodilla
Proceedings of the 19th International Workshop on Semantic Evaluation (SemEval-2025)

This paper presents an original approach for SemEval-2025 Task 11. Our study investigates strategies for improving text-based multi-label emotion detection. Through our experiments, we explore the benefits of contextualized vector representations by comparing multiple BERT models, including models specifically trained for emotion recognition. Additionally, we examine the impact of hyperparameter adjustments on model performance. For Subtask A, our approach achieved F1 scores of 0.71 on the English dataset and 0.84 on the Russian dataset. Our findings underscore that (1) monolingual BERT models demonstrate superior performance for English, whereas multilingual BERT models perform better for Russian; (2) pretrained emotion detection models prove less effective for this specific task than models with reduced vocabulary and embeddings focused on specific languages; and (3) the exclusive use of BERT-based models, without additional methods or optimization techniques, yields promising results for multi-label emotion detection.
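
As an illustration of the BERT-based multi-label setup described above, here is a minimal sketch using the Hugging Face transformers library. The checkpoint name, the label inventory, and the 0.5 decision threshold are assumptions for illustration, and the model would still need fine-tuning on the task data before its predictions are meaningful.

```python
# Minimal sketch of BERT-based multi-label emotion detection.
# Checkpoint, labels, and threshold are assumptions, not the
# authors' exact configuration; the head must be fine-tuned first.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

EMOTIONS = ["anger", "fear", "joy", "sadness", "surprise"]  # assumed label set

model_name = "bert-base-uncased"  # a monolingual English BERT, per finding (1)
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(
    model_name,
    num_labels=len(EMOTIONS),
    problem_type="multi_label_classification",  # BCE loss, independent sigmoids
)

def predict_emotions(text: str, threshold: float = 0.5) -> list[str]:
    """Return every emotion whose sigmoid probability clears the threshold."""
    inputs = tokenizer(text, return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = model(**inputs).logits
    probs = torch.sigmoid(logits).squeeze(0)  # one probability per label
    return [label for label, p in zip(EMOTIONS, probs) if p >= threshold]

print(predict_emotions("I can't believe we won, I'm so happy!"))
```

Setting problem_type="multi_label_classification" makes the model treat each label as an independent binary decision (sigmoid per label rather than a softmax over labels), which is what lets a single text carry several emotions at once.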