2024
Interpretable User Satisfaction Estimation for Conversational Systems with Large Language Models
Ying-Chun Lin | Jennifer Neville | Jack Stokes | Longqi Yang | Tara Safavi | Mengting Wan | Scott Counts | Siddharth Suri | Reid Andersen | Xiaofeng Xu | Deepak Gupta | Sujay Kumar Jauhar | Xia Song | Georg Buscher | Saurabh Tiwary | Brent Hecht | Jaime Teevan
Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
Accurate and interpretable user satisfaction estimation (USE) is critical for understanding, evaluating, and continuously improving conversational systems. Users express their satisfaction or dissatisfaction with diverse conversational patterns in both general-purpose (ChatGPT and Bing Copilot) and task-oriented (customer service chatbot) conversational systems. Existing approaches based on featurized ML models or text embeddings fall short in extracting generalizable patterns and are hard to interpret. In this work, we show that LLMs can extract interpretable signals of user satisfaction from users' natural language utterances more effectively than embedding-based approaches. Moreover, an LLM can be tailored for USE via an iterative prompting framework using supervision from labeled examples. Our proposed method, Supervised Prompting for User satisfaction Rubrics (SPUR), not only achieves higher accuracy but is also more interpretable, as it scores user satisfaction via learned rubrics with a detailed breakdown.
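The abstract describes SPUR only at a high level, so the sketch below is a minimal, hypothetical illustration of the general idea of rubric-based satisfaction scoring with an LLM prompt; it is not the paper's actual prompting framework. The rubric items, the prompt wording, the `call_llm` helper, and the aggregation rule are all placeholder assumptions, where SPUR would instead learn rubrics iteratively from labeled conversations.

```python
# Minimal sketch (not the paper's SPUR prompts or learned rubrics): ask an LLM
# whether each rubric item applies to a conversation, then aggregate the flags
# into a crude satisfaction score. `call_llm` is a placeholder to be replaced
# with a real chat-completion client.
import json
from typing import List

# Hypothetical rubric items; SPUR would learn these from labeled examples.
RUBRIC = [
    "User explicitly thanks the assistant or expresses appreciation.",
    "User repeats or rephrases the same request, suggesting the answer missed the mark.",
    "User abandons the task or switches topics abruptly.",
]

PROMPT_TEMPLATE = """You are rating user satisfaction in a conversation.
For each rubric item, answer 1 if it applies to the conversation, else 0.
Return only a JSON list of integers, one per item.

Rubric items:
{rubric}

Conversation:
{conversation}
"""


def call_llm(prompt: str) -> str:
    """Placeholder: replace with a real LLM API call."""
    raise NotImplementedError


def score_satisfaction(conversation: str, rubric: List[str]) -> int:
    """Return a simple aggregate: +1 for the satisfaction cue, -1 per dissatisfaction cue."""
    prompt = PROMPT_TEMPLATE.format(
        rubric="\n".join(f"{i + 1}. {item}" for i, item in enumerate(rubric)),
        conversation=conversation,
    )
    flags = json.loads(call_llm(prompt))
    # Illustrative aggregation only: the first item signals satisfaction,
    # the remaining items signal dissatisfaction.
    return flags[0] - sum(flags[1:])
```

A usage example would pass the conversation transcript as a single string, e.g. `score_satisfaction("User: ...\nAssistant: ...", RUBRIC)`; the per-item flags also give the kind of interpretable breakdown the abstract refers to.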