Iago Alves Brito


2025

Modeling, Evaluating, and Embodying Personality in LLMs: A Survey
Iago Alves Brito | Julia Soares Dollis | Fernanda Bufon Färber | Pedro Schindler Freire Brasil Ribeiro | Rafael Teixeira Sousa | Arlindo Rodrigues Galvão Filho
Findings of the Association for Computational Linguistics: EMNLP 2025

As large language models (LLMs) become integral to social and interactive applications, the ability to model, control, and evaluate their personality traits has become a critical area of research. This survey provides a comprehensive and structured overview of the landscape of LLM-driven personality modeling. We introduce a functional taxonomy that organizes the field by how personality is modeled (from rule-based methods to model-centric and system-level LLM techniques), across which modalities it is expressed (extending beyond text to vision, speech, and immersive virtual reality), and how it is validated (covering both qualitative and quantitative evaluation paradigms). By contextualizing current advances and systematically analyzing the limitations of existing methods, including subjectivity, context dependence, limited multimodal integration, and the lack of standardized evaluation protocols, we identify key research gaps. This survey serves as a guide for future inquiry, paving the way for the development of LLMs with more consistent, expressive, and trustworthy personality traits.

Proxy Barrier: A Hidden Repeater Layer Defense Against System Prompt Leakage and Jailbreaking
Pedro Schindler Freire Brasil Ribeiro | Iago Alves Brito | Rafael Teixeira Sousa | Fernanda Bufon Färber | Julia Soares Dollis | Arlindo Rodrigues Galvão Filho
Findings of the Association for Computational Linguistics: EMNLP 2025

Prompt injection and jailbreak attacks remain a critical vulnerability for deployed large language models (LLMs), allowing adversaries to bypass safety protocols and extract sensitive information. To address this, we present Proxy Barrier (ProB), a lightweight defense that interposes a proxy LLM between the user and the target model. The proxy LLM is tasked solely with repeating the user input; any failure to do so indicates an attempt to reveal or override system instructions, so the malicious request is detected and blocked before it reaches the target model. ProB therefore requires no access to model weights or prompts and is deployable entirely at the API level. Experiments across multiple model families demonstrate that ProB achieves state-of-the-art resilience against prompt leakage and jailbreak attacks. Notably, our approach outperforms baselines, achieving up to 98.8% defense effectiveness, and shows robust protection across both open- and closed-source LLMs when suitably paired with proxy models, while also keeping response quality intact.
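
To make the repeater mechanism concrete, here is a minimal Python sketch of the idea as described in the abstract: the proxy LLM is asked only to echo the user input, and a low match between input and echo flags a likely injection attempt. The chat helper, function names, and similarity threshold are illustrative assumptions, not the authors' implementation.

# Sketch of the Proxy Barrier (ProB) gating idea (assumed interface).
from difflib import SequenceMatcher

REPEATER_PROMPT = (
    "Repeat the following user message verbatim. "
    "Output nothing except the message itself."
)

def chat(model: str, system: str, user: str) -> str:
    """Placeholder for an API call to a hosted LLM (assumed helper)."""
    raise NotImplementedError

def prob_gate(user_input: str, proxy_model: str, threshold: float = 0.95) -> bool:
    """Return True if the input passes the proxy barrier."""
    echoed = chat(proxy_model, REPEATER_PROMPT, user_input)
    # An injected instruction ("ignore previous instructions...") tends to
    # derail the repeater, so low similarity signals an attack attempt.
    similarity = SequenceMatcher(None, user_input.strip(), echoed.strip()).ratio()
    return similarity >= threshold

def guarded_query(user_input: str, proxy_model: str, target_model: str,
                  target_system_prompt: str) -> str:
    """Forward the request to the target model only if the gate passes."""
    if not prob_gate(user_input, proxy_model):
        return "Request blocked: possible prompt-injection attempt."
    return chat(target_model, target_system_prompt, user_input)

Because the check compares only the user input against the proxy's echo, it needs no knowledge of the target model's weights or system prompt, which is consistent with the abstract's claim that the defense is deployable entirely at the API level.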