Jennifer L. Eberhardt
Also published as: Jennifer L Eberhardt
2025
Tell, Don’t Show: Leveraging Language Models’ Abstractive Retellings to Model Literary Themes
Li Lucy | Camilla Griffiths | Sarah Levine | Jennifer L Eberhardt | Dorottya Demszky | David Bamman
Findings of the Association for Computational Linguistics: ACL 2025
Conventional bag-of-words approaches for topic modeling, like latent Dirichlet allocation (LDA), struggle with literary text. Literature challenges lexical methods because narrative language focuses on immersive sensory details instead of abstractive description or exposition: writers are advised to *show, don’t tell*. We propose Retell, a simple, accessible topic modeling approach for literature. Here, we prompt resource-efficient, generative language models (LMs) to *tell* what passages *show*, thereby translating narratives’ surface forms into higher-level concepts and themes. By running LDA on LMs’ retellings of passages, we can obtain more precise and informative topics than by running LDA alone or by directly asking LMs to list topics. To investigate the potential of our method for cultural analytics, we compare our method’s outputs to expert-guided annotations in a case study on racial/cultural identity in high school English language arts books.
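The abstract describes a two-stage pipeline: prompt a generative LM for an abstractive retelling of each passage, then run ordinary LDA over the retellings rather than the raw narrative text. The sketch below illustrates that shape only; the prompt wording, the placeholder model, and the `retell` helper are assumptions for demonstration, not the paper's actual setup.

```python
# Illustrative Retell-style pipeline: ask an LM to "tell" what a passage "shows",
# then run standard LDA over the retellings. Model choice and prompt are assumed.
from transformers import pipeline
from gensim import corpora
from gensim.models import LdaModel

generator = pipeline("text-generation", model="gpt2")  # placeholder LM

def retell(passage: str) -> str:
    """Ask the LM for an abstractive retelling of a narrative passage."""
    prompt = (
        "Describe in plain, abstract terms what this passage is about:\n"
        f"{passage}\nSummary:"
    )
    out = generator(prompt, max_new_tokens=60, do_sample=False)[0]["generated_text"]
    return out[len(prompt):]  # keep only the generated continuation

passages = [
    "She gripped the steering wheel, knuckles white, as the sirens grew louder.",
    "He folded the letter twice before sliding it under her door without a word.",
]
retellings = [retell(p).lower().split() for p in passages]

# Standard LDA, but over the retellings instead of the raw narrative surface forms.
dictionary = corpora.Dictionary(retellings)
corpus = [dictionary.doc2bow(doc) for doc in retellings]
lda = LdaModel(corpus, num_topics=2, id2word=dictionary, passes=5)
print(lda.print_topics())
```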
2018
Detecting Institutional Dialog Acts in Police Traffic Stops
Vinodkumar Prabhakaran | Camilla Griffiths | Hang Su | Prateek Verma | Nelson Morgan | Jennifer L. Eberhardt | Dan Jurafsky
Transactions of the Association for Computational Linguistics, Volume 6
We apply computational dialog methods to police body-worn camera footage to model conversations between police officers and community members in traffic stops. Relying on the theory of institutional talk, we develop a labeling scheme for police speech during traffic stops, and a tagger to detect institutional dialog acts (Reasons, Searches, Offering Help) from transcribed text at the turn (78% F-score) and stop (89% F-score) level. We then develop speech recognition and segmentation algorithms to detect these acts at the stop level from raw camera audio (81% F-score, with even higher accuracy for crucial acts like conveying the reason for the stop). We demonstrate that the dialog structures produced by our tagger could reveal whether officers follow law enforcement norms like introducing themselves, explaining the reason for the stop, and asking permission for searches. This work may therefore inform and aid efforts to ensure the procedural justice of police-community interactions.
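The paper's tagger labels transcribed turns with institutional dialog acts such as conveying the Reason for the stop, requesting a Search, and Offering Help. The sketch below is a generic TF-IDF plus logistic-regression baseline for that turn-level labeling task, not the authors' model; the example turns and labels are invented for illustration.

```python
# Toy turn-level tagger for institutional dialog acts (Reason, Search, OfferHelp).
# A generic scikit-learn baseline, not the paper's tagger; training data is made up.
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

turns = [
    "the reason i stopped you is your registration is expired",
    "i pulled you over because you ran that red light",
    "do you mind if i take a look inside the vehicle",
    "mind if i search the trunk real quick",
    "do you need directions to get back on the highway",
    "is there anything i can help you with today",
]
labels = ["Reason", "Reason", "Search", "Search", "OfferHelp", "OfferHelp"]

# Bag-of-ngram features over each transcribed turn, then a linear classifier.
tagger = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                       LogisticRegression(max_iter=1000))
tagger.fit(turns, labels)

print(tagger.predict(["i stopped you because your taillight is out"]))
```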
Co-authors
- Camilla Griffiths 2
- David Bamman 1
- Dorottya Demszky 1
- Dan Jurafsky 1
- Sarah Levine 1
- Li Lucy 1
- Nelson Morgan 1
- Vinodkumar Prabhakaran 1
- Hang Su 1
- Prateek Verma 1