FQ-Eval: Building Evaluation Dataset for User-centered Follow-up Question Generation
Sanghyun Seo | Bumsoo Kang | Dahm Lee | Jaeheon Kim | Joongbo Shin | Eui Soon Kim | Kijeong Jeon
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing: Industry Track
To effectively support users’ goal achievement in chat-LLM services, providing user-centered follow-up questions is essential. Existing studies primarily focus on enhancing information-seeking or topical relevance, often missing how follow-up questions could satisfy users’ intrinsic needs and conversational goals. To bridge this gap, we introduce FQ-Eval, a user-centered evaluation dataset designed for assessing follow-up question generation in chat-LLM services. FQ-Eval incorporates realistic chat-LLM usage scenarios and five distinct human-aligned criteria, each reflecting user expectations of effective follow-up questions. Experimental results show that FQ-Eval, constructed through our approach, clearly captures these human-aligned criteria, enabling robust, human-aligned evaluation of follow-up question generation across various models and services.