Proceedings of the Queer in AI Workshop
A Pranav | Alissa Valentine | Shaily Bhatt | Yanan Long | Arjun Subramonian | Amanda Bertsch | Anne Lauscher | Ankush Gupta
Studying the Representation of the LGBTQ+ Community in RuPaul’s Drag Race with LLM-Based Topic Modeling
Mika Hämäläinen
This study investigates the representation of the LGBTQ+ community in the widely acclaimed reality television series RuPaul’s Drag Race through a novel application of large language model (LLM)-based topic modeling. By analyzing subtitles from seasons 1 to 16, the research identifies a spectrum of topics ranging from empowering themes, such as self-expression through drag, community support, and positive body image, to challenges faced by the LGBTQ+ community, including homophobia, HIV, and mental health. Employing an LLM allowed for a nuanced exploration of these themes, overcoming the limitations of traditional word-based topic modeling.
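As a rough illustration of the kind of pipeline this abstract describes, the sketch below labels subtitle chunks with an LLM and tallies the resulting topics. The `ask_llm` stub, the topic list, and the demo lines are placeholders for illustration, not materials or methods from the paper.

```python
# Hedged sketch of LLM-based topic labeling over subtitle chunks.
from collections import Counter
from typing import List

TOPICS = ["self-expression", "community support", "body image",
          "homophobia", "hiv", "mental health", "other"]

def ask_llm(prompt: str) -> str:
    # Placeholder: swap in a real chat-completion client here.
    return "self-expression"

def label_chunk(chunk: str) -> str:
    prompt = (f"Assign exactly one topic label from {TOPICS} "
              f"to this subtitle excerpt:\n{chunk}")
    answer = ask_llm(prompt).strip().lower()
    return answer if answer in TOPICS else "other"

def topic_distribution(chunks: List[str]) -> Counter:
    # One LLM call per chunk; aggregate labels into a distribution.
    return Counter(label_chunk(c) for c in chunks)

if __name__ == "__main__":
    demo = ["You better work!", "Reading is fundamental."]
    print(topic_distribution(demo))
```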
Guardrails, not Guidance: Understanding Responses to LGBTQ+ Language in Large Language Models
Joshua Tint
Language models have become integrated into many aspects of digital life, shaping everything from social media to translation. This paper investigates how large language models (LLMs) respond to LGBTQ+ slang and heteronormative language. Through two experiments, the study assesses the emotional content of, and the impact of queer slang on, responses from models including GPT-3.5, GPT-4o, Llama2, Llama3, Gemma, and Mistral. The findings reveal that heteronormative prompts can trigger safety mechanisms, leading to neutral or corrective responses, while LGBTQ+ slang elicits more negative emotions. These insights underscore the need to provide equitable outcomes for minority slangs and argots, in addition to eliminating explicit bigotry from language models.
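A toy sketch of the kind of emotion comparison described above: score two sets of model responses with a small valence lexicon and compare their means. The lexicon and the example responses are illustrative assumptions, not the paper's materials or results.

```python
# Toy valence comparison between two sets of model responses.
from statistics import mean

VALENCE = {"love": 1.0, "proud": 0.9, "support": 0.8,
           "sorry": -0.4, "cannot": -0.5, "wrong": -0.7}

def valence_score(text: str) -> float:
    # Average lexicon valence of matched words; 0.0 if none match.
    hits = [VALENCE[w] for w in text.lower().split() if w in VALENCE]
    return mean(hits) if hits else 0.0

def mean_valence(responses: list[str]) -> float:
    return mean(valence_score(r) for r in responses)

slang_responses = ["sorry i cannot help with that"]
neutral_responses = ["i love and support that"]
print(mean_valence(slang_responses), mean_valence(neutral_responses))
```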
Dehumanization of LGBTQ+ Groups in Sexual Interactions with ChatGPT
Alexandria Leto | Juan Vásquez | Alexis Palmer | Maria Leonor Pacheco
Given the widespread use of LLM-powered conversational agents such as ChatGPT, analyzing the ways people interact with them could provide valuable insights into human behavior. Prior work has shown that these agents are sometimes used in sexual contexts, such as to obtain advice, to role-play as sexual companions, or to generate erotica. While LGBTQ+ acceptance has increased in recent years, dehumanizing practices against minorities continue to prevail. In this paper, we home in on this and perform an analysis of dehumanizing tendencies toward LGBTQ+ individuals by human users in their sexual interactions with ChatGPT. Through a series of experiments that model various concept vectors associated with distinct shades of dehumanization, we find evidence of the reproduction of harmful stereotypes. However, many user prompts lack indications of dehumanization, suggesting that the use of these agents is a complex and nuanced issue which warrants further investigation.
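The hedged sketch below shows one common way a concept-vector probe of this kind can be built: form a "dehumanization" direction from seed-word vectors and score texts by cosine similarity. The seed words and the toy 3-d embeddings are assumptions for illustration; the paper's actual vectors and method may differ.

```python
# Illustrative concept-vector probe with toy embeddings.
import numpy as np

EMB = {  # toy 3-d embeddings, for illustration only
    "object": np.array([0.9, 0.1, 0.0]),
    "thing":  np.array([0.8, 0.2, 0.1]),
    "person": np.array([0.1, 0.9, 0.2]),
    "friend": np.array([0.0, 0.8, 0.3]),
}

def unit(v: np.ndarray) -> np.ndarray:
    return v / np.linalg.norm(v)

# Concept direction: dehumanizing seeds minus humanizing seeds.
concept = unit(EMB["object"] + EMB["thing"] - EMB["person"] - EMB["friend"])

def dehumanization_score(tokens: list[str]) -> float:
    # Cosine similarity between the mean token vector and the concept.
    vecs = [EMB[t] for t in tokens if t in EMB]
    return float(unit(np.mean(vecs, axis=0)) @ concept) if vecs else 0.0

print(dehumanization_score(["object", "thing"]))   # high similarity
print(dehumanization_score(["friend", "person"]))  # low similarity
```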
Leveraging Large Language Models in Detecting Anti-LGBTQIA+ User-generated Texts
Quoc-Toan Nguyen | Josh Nguyen | Tuan Pham | William John Teahan
Anti-LGBTQIA+ texts in user-generated content pose significant risks to online safety and inclusivity. This study investigates the capabilities and limitations of five widely adopted Large Language Models (LLMs)—DeepSeek-V3, GPT-4o, GPT-4o-mini, GPT-o1-mini, and Llama3.3-70B—in detecting such harmful content. Our findings reveal that while LLMs demonstrate potential in identifying offensive language, their effectiveness varies across models and metrics, with notable shortcomings in calibration. Furthermore, linguistic analysis exposes deeply embedded patterns of discrimination, reinforcing the urgent need for improved detection mechanisms for this marginalised population. In summary, this study demonstrates the significant potential of LLMs for practical application in detecting anti-LGBTQIA+ user-generated texts and provides valuable insights from text analysis that can inform topic modelling. These findings contribute to developing safer digital platforms and enhancing protection for LGBTQIA+ individuals.
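The abstract flags calibration as a weak point; one standard way to measure it is expected calibration error (ECE), sketched below over binned confidences. The data in the demo call is synthetic, and the paper may use different metrics or binning.

```python
# Expected calibration error (ECE): weighted gap between confidence
# and accuracy within equal-width confidence bins.
import numpy as np

def ece(confidences, correct, n_bins: int = 10) -> float:
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    total = 0.0
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            gap = abs(correct[mask].mean() - confidences[mask].mean())
            total += mask.mean() * gap  # weight by bin proportion
    return total

# Synthetic demo: confidences vs. whether each prediction was correct.
print(ece([0.9, 0.8, 0.6, 0.95], [1, 1, 0, 1]))
```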
A Bayesian account of pronoun and neopronoun acquisition
Cassandra L Jacobs | Morgan Grobol
A major challenge to equity among members of queer communities is the use of one’s chosen forms of reference, such as personal names or pronouns. Speakers often dismiss errors in pronominal use as unintentional, and claim that their errors reflect many decades of fossilized mainstream language use, including attitudes or expectations about the relationship between one’s appearance and acceptable forms of reference. Here, we propose a modeling framework that allows language use and speech communities to change over time, including the adoption of neopronouns and other forms for self-reference. We present a probabilistic graphical modeling approach to pronominal reference that is flexible in the face of change and experience while also moving beyond form-to-meaning mappings. Critically, the model also does not rely on lexical covariance structure to learn referring expressions. We show that such a model can account for individual differences in how quickly pronouns or names are integrated into symbolic knowledge and can empower computational systems to be both flexible and respectful of queer people with diverse gender expression.
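As a minimal sketch of the acquisition dynamic this abstract describes, the toy model below is a Dirichlet-categorical learner that updates its belief about a referent's pronoun from observed uses; the prior strength `alpha` controls how quickly new evidence displaces old expectations. The pronoun set, counts, and conjugate-prior choice are illustrative assumptions, not the paper's actual graphical model.

```python
# Toy Dirichlet-categorical update over a referent's pronoun.
from collections import Counter

PRONOUNS = ["she", "he", "they", "xe"]

def posterior(observations: list[str], alpha: float = 1.0) -> dict[str, float]:
    # Posterior predictive: (count + alpha) / (total + alpha * K).
    counts = Counter(observations)
    total = sum(counts.values()) + alpha * len(PRONOUNS)
    return {p: (counts[p] + alpha) / total for p in PRONOUNS}

# After a few observed uses, "xe" dominates despite the uniform prior;
# a larger alpha would make the learner slower to update.
print(posterior(["xe", "xe", "they", "xe"]))
```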