Tony Mullen


2025

Exploiting contextual information to improve stance detection in informal political discourse with LLMs
Arman Engin Sucu | Yixiang Zhou | Mario A. Nascimento | Tony Mullen
Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 4: Student Research Workshop)

This study investigates the use of Large Language Models (LLMs) for political stance detection in informal online discourse, where language is often sarcastic, ambiguous, and context-dependent. We explore whether providing contextual information, specifically user profile summaries derived from historical posts, can improve classification accuracy. Using a real-world political forum dataset, we generate structured profiles that summarize users’ ideological leaning, recurring topics, and linguistic patterns. We conduct a comprehensive cross-model evaluation of seven state-of-the-art LLMs under baseline and context-enriched setups. Our findings show that contextual prompts significantly boost accuracy, with improvements ranging from +17.5% to +38.5%, reaching up to 74% accuracy and surpassing previous approaches. We also analyze how profile size and post selection strategies affect performance, showing that strategically chosen political content yields better results than larger, randomly selected contexts. These findings underscore the value of incorporating user-level context to enhance LLM performance in nuanced political classification tasks.
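To make the two setups concrete, here is a minimal sketch of baseline versus context-enriched prompting. It assumes an OpenAI-style chat API; the model name, prompt wording, helper names, and stance labels are illustrative placeholders, not the paper's actual implementation or label set.

```python
# Sketch of baseline vs. context-enriched stance detection.
# All names, prompts, labels, and the model choice are assumptions
# for illustration; they are not taken from the paper itself.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment


def build_profile_summary(historical_posts: list[str]) -> str:
    """Distill a user's historical posts into a structured profile
    (ideological leaning, recurring topics, linguistic patterns)."""
    joined = "\n---\n".join(historical_posts)
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system",
             "content": "Summarize this user's political leaning, "
                        "recurring topics, and linguistic style in a "
                        "short structured profile."},
            {"role": "user", "content": joined},
        ],
    )
    return response.choices[0].message.content


def classify_stance(post: str, profile: str | None = None) -> str:
    """Classify a post's stance; prepending the profile gives the
    context-enriched setup, omitting it gives the baseline."""
    context = f"User profile:\n{profile}\n\n" if profile else ""
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system",
             "content": "Classify the political stance of the post as "
                        "'left', 'right', or 'other'. Reply with one word."},
            {"role": "user", "content": f"{context}Post:\n{post}"},
        ],
    )
    return response.choices[0].message.content.strip().lower()
```

In these terms, classify_stance(post) corresponds to the baseline setup and classify_stance(post, build_profile_summary(posts)) to the context-enriched one.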

2004

Incorporating topic information into semantic analysis models
Tony Mullen | Nigel Collier
Proceedings of the ACL Interactive Poster and Demonstration Sessions

Sentiment Analysis using Support Vector Machines with Diverse Information Sources
Tony Mullen | Nigel Collier
Proceedings of the 2004 Conference on Empirical Methods in Natural Language Processing

2000

Overfitting Avoidance for Stochastic Modeling of Attribute-Value Grammars
Tony Mullen | Miles Osborne
Fourth Conference on Computational Natural Language Learning and the Second Learning Language in Logic Workshop