Susan E. Brennan

Also published as: Susan Brennan


2025

LVLMs are Bad at Overhearing Human Referential Communication
Zhengxiang Wang | Weiling Li | Panagiotis Kaliosis | Owen Rambow | Susan Brennan
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing

During spontaneous conversations, speakers collaborate on novel referring expressions, which they can then re-use in subsequent conversations. Understanding such referring expressions is an important ability for an embodied agent, so that it can carry out tasks in the real world; this requires integrating and understanding language, vision, and conversational interaction. We study the capabilities of seven state-of-the-art Large Vision Language Models (LVLMs) as overhearers to a corpus of spontaneous conversations between pairs of human discourse participants engaged in a collaborative object-matching task. We find that the task remains challenging for current LVLMs: all seven fail to show consistent performance improvements as they overhear more conversations from the same discourse participants repeating the same task over multiple rounds. We release our corpus and code for reproducibility and to facilitate future research.

2024

Training LLMs to Recognize Hedges in Dialogues about Roadrunner Cartoons
Amie Paige | Adil Soubki | John Murzaku | Owen Rambow | Susan E. Brennan
Proceedings of the 25th Annual Meeting of the Special Interest Group on Discourse and Dialogue

Hedges allow speakers to mark utterances as provisional, whether to signal non-prototypicality or “fuzziness”, to indicate a lack of commitment to an utterance, to attribute responsibility for a statement to someone else, to invite input from a partner, or to soften critical feedback in the service of face management needs. Here we focus on hedges in an experimentally parameterized corpus of 63 Roadrunner cartoon narratives spontaneously produced from memory by 21 speakers for co-present addressees, transcribed to text (Galati and Brennan, 2010). We created a gold standard of hedges annotated by human coders (the Roadrunner-Hedge corpus) and compared three LLM-based approaches for hedge detection: fine-tuning BERT, and zero- and few-shot prompting with GPT-4o and LLaMA-3. The best-performing approach was a fine-tuned BERT model, followed by few-shot GPT-4o. After an error analysis of the top-performing approaches, we used an LLM-in-the-Loop approach to improve the gold standard coding, as well as to highlight cases in which hedges are ambiguous in linguistically interesting ways that will guide future research. This is the first step in our research program to train LLMs to interpret and generate collateral signals appropriately and meaningfully in conversation.

2016

Keynote - More than meets the ear: Processes that shape dialogue
Susan Brennan
Proceedings of the 17th Annual Meeting of the Special Interest Group on Discourse and Dialogue

2000

Invited Talk: Processes that Shape Conversation and their Implications for Computational Linguistics
Susan E. Brennan
Proceedings of the 38th Annual Meeting of the Association for Computational Linguistics

1988

The Multimedia Articulation of Answers in a Natural Language Database Query System
Susan E. Brennan
Second Conference on Applied Natural Language Processing

1987

A Centering Approach to Pronouns
Susan E. Brennan | Marilyn W. Friedman | Carl J. Pollard
25th Annual Meeting of the Association for Computational Linguistics