Hiram Calvo


2025

CIC-IPN at SemEval-2025 Task 11: Transformer-Based Approach to Multi-Class Emotion Detection
Tolulope Abiola | Olumide Ebenezer Ojo | Grigori Sidorov | Olga Kolesnikova | Hiram Calvo
Proceedings of the 19th International Workshop on Semantic Evaluation (SemEval-2025)

This paper presents our system description for the SemEval-2025 workshop Task 11 (Track A): a multi-step approach to multi-label emotion classification using machine learning and deep learning models. We test our methodology on English, Spanish, and low-resource Yoruba datasets, each labeled with five emotion categories: anger, fear, joy, sadness, and surprise. Our preprocessing involves text cleaning and feature extraction using bigrams and TF-IDF. We employ logistic regression as a baseline classifier and fine-tune Transformer models such as BERT and XLM-RoBERTa for improved performance. The Transformer-based models outperformed the logistic regression baseline, achieving micro-F1 scores of 0.7061, 0.7321, and 0.2825 for English, Spanish, and Yoruba, respectively. Notably, our fine-tuned Yoruba model outperformed the task organizers' baseline by a micro-F1 margin of 0.092, demonstrating the effectiveness of Transformer models in handling emotion classification tasks across diverse languages.
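The baseline pipeline described in the abstract (bigram TF-IDF features feeding a logistic regression classifier over five emotion labels) can be sketched as follows; this is a minimal illustration assuming scikit-learn, with toy data standing in for the task datasets, not the authors' actual implementation:

```python
# Sketch of a TF-IDF + logistic regression multi-label baseline.
# The texts and labels below are illustrative, not task data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier

EMOTIONS = ["anger", "fear", "joy", "sadness", "surprise"]

texts = [
    "I am so happy about this news",
    "This is terrifying and makes me angry",
    "What a sad day",
    "I did not expect that at all",
]
# One binary indicator per emotion, in EMOTIONS order (multi-label).
labels = [
    [0, 0, 1, 0, 0],
    [1, 1, 0, 0, 0],
    [0, 0, 0, 1, 0],
    [0, 0, 0, 0, 1],
]

# Unigram + bigram TF-IDF features, as in the described preprocessing.
vectorizer = TfidfVectorizer(ngram_range=(1, 2))
X = vectorizer.fit_transform(texts)

# One-vs-rest wrapping lets logistic regression handle multiple labels
# per example: one binary classifier per emotion.
clf = OneVsRestClassifier(LogisticRegression(max_iter=1000))
clf.fit(X, labels)

preds = clf.predict(vectorizer.transform(["such a joyful surprise"]))
print(preds.shape)  # one row, five emotion columns
```

The one-vs-rest decomposition is what makes this multi-label rather than multi-class: each emotion gets an independent yes/no decision, so a text can carry several emotions at once.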

2023

Legend at ArAIEval Shared Task: Persuasion Technique Detection using a Language-Agnostic Text Representation Model
Olumide Ojo | Olaronke Adebanji | Hiram Calvo | Damian Dieke | Olumuyiwa Ojo | Seye Akinsanya | Tolulope Abiola | Anna Feldman
Proceedings of ArabicNLP 2023

In this paper, we share our best-performing submission to the Arabic AI Tasks Evaluation Challenge (ArAIEval) at ArabicNLP 2023. We focus on Task 1, which involves identifying persuasion techniques in excerpts from tweets and news articles. Persuasion techniques in the Arabic texts were detected by fine-tuning XLM-RoBERTa, a language-agnostic text representation model, in a training loop. This approach proved effective, leveraging the multilingual pre-training of the model. On the test set, we achieved a micro-F1 score of 0.64 for subtask A of the competition.
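The abstract's core recipe, fine-tuning XLM-RoBERTa in a plain training loop for binary persuasion detection, can be sketched as below. This is a hedged illustration assuming PyTorch and the Hugging Face `transformers` API; the paper's actual hyperparameters and data handling are not reproduced, and the model-loading lines are shown as comments since they download pretrained weights:

```python
# Generic PyTorch fine-tuning loop for a sequence-classification model.
# Works with any model exposing the Hugging Face forward signature
# (input_ids, attention_mask, labels) -> output with .loss.
import torch
from torch.utils.data import DataLoader, TensorDataset

def fine_tune(model, tokenizer, texts, labels, epochs=3, lr=2e-5, batch_size=8):
    """Fine-tune `model` on (text, label) pairs; labels are class indices."""
    enc = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    dataset = TensorDataset(
        enc["input_ids"], enc["attention_mask"], torch.tensor(labels)
    )
    loader = DataLoader(dataset, batch_size=batch_size, shuffle=True)
    optimizer = torch.optim.AdamW(model.parameters(), lr=lr)
    model.train()
    for _ in range(epochs):
        for input_ids, attention_mask, y in loader:
            out = model(input_ids=input_ids, attention_mask=attention_mask, labels=y)
            out.loss.backward()   # the model computes the loss internally
            optimizer.step()
            optimizer.zero_grad()
    return model

# Usage sketch (downloads pretrained weights, so left commented out):
# from transformers import AutoModelForSequenceClassification, AutoTokenizer
# tok = AutoTokenizer.from_pretrained("xlm-roberta-base")
# mdl = AutoModelForSequenceClassification.from_pretrained(
#     "xlm-roberta-base", num_labels=2)
# fine_tune(mdl, tok, arabic_texts, binary_labels)
```

Because XLM-RoBERTa's tokenizer and embeddings are shared across languages, the same loop transfers to Arabic with no language-specific feature engineering, which is the "language-agnostic" property the abstract relies on.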

2018

Distribution of Emotional Reactions to News Articles in Twitter
Omar Juárez Gambino | Hiram Calvo | Consuelo-Varinia García-Mendoza
Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018)

2016

Automatic Text Generation by Learning from Literary Structures
Angel Daza | Hiram Calvo | Jesús Figueroa-Nazuno
Proceedings of the Fifth Workshop on Computational Linguistics for Literature

2014

CoNLL 2014 Shared Task: Grammatical Error Correction with a Syntactic N-gram Language Model from a Big Corpus
S. David Hernandez | Hiram Calvo
Proceedings of the Eighteenth Conference on Computational Natural Language Learning: Shared Task

2009

Interpolated PLSI for Learning Plausible Verb Arguments
Hiram Calvo | Kentaro Inui | Yuji Matsumoto
Proceedings of the 23rd Pacific Asia Conference on Language, Information and Computation, Volume 2