Shima Khanehzar


2023

Probing Power by Prompting: Harnessing Pre-trained Language Models for Power Connotation Framing
Shima Khanehzar | Trevor Cohn | Gosia Mikolajczak | Lea Frermann
Proceedings of the 17th Conference of the European Chapter of the Association for Computational Linguistics

When describing actions, subtle changes in word choice can evoke very different associations with the involved entities. For instance, a company ‘employing’ workers evokes a more positive connotation than one ‘exploiting’ them. This concept is called connotation. This paper investigates whether pre-trained language models (PLMs) encode such subtle connotative information about power differentials between involved entities. We design a probing framework for power connotation, building on Sap et al. (2017)’s operationalization of connotation frames. We show that zero-shot prompting of PLMs leads to above-chance prediction of power connotation; however, fine-tuning PLMs using our framework drastically improves their accuracy. Using our fine-tuned models, we present a case study of power dynamics in US news reporting on immigration, showing the potential of our framework as a tool for understanding subtle bias in the media.

2021

Framing Unpacked: A Semi-Supervised Interpretable Multi-View Model of Media Frames
Shima Khanehzar | Trevor Cohn | Gosia Mikolajczak | Andrew Turpin | Lea Frermann
Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies

Understanding how news media frame political issues is important due to its impact on public attitudes, yet hard to automate. Computational approaches have largely focused on classifying the frame of a full news article, while framing signals are often subtle and local. Furthermore, automatic news analysis is a sensitive domain, and existing classifiers lack transparency in their predictions. This paper addresses both issues with a novel semi-supervised model, which jointly learns to embed local information about the events and related actors in a news article through an auto-encoding framework, and to leverage this signal for document-level frame classification. Our experiments show that: our model outperforms previous models of frame prediction; we can further improve performance with unlabeled training data, leveraging the semi-supervised nature of our model; and the learnt event and actor embeddings intuitively corroborate the document-level predictions, providing a nuanced and interpretable article frame representation.

2019

Modeling Political Framing Across Policy Issues and Contexts
Shima Khanehzar | Andrew Turpin | Gosia Mikolajczak
Proceedings of the 17th Annual Workshop of the Australasian Language Technology Association