Transparent Reference-free Automated Evaluation of Open-Ended User Survey Responses
Subin An, Yugyeong Ji, Junyoung Kim, Heejin Kook, Yang Lu, Josh Seltzer
Abstract
Open-ended survey responses provide valuable insights in marketing research, but low-quality responses not only burden researchers with manual filtering but also risk producing misleading conclusions, underscoring the need for effective evaluation. Existing automatic evaluation methods target LLM-generated text and inadequately assess human-written responses, which have distinct characteristics. To address these characteristics, we propose a two-stage evaluation framework designed specifically for human survey responses. First, gibberish filtering removes nonsensical responses. Then, three dimensions (effort, relevance, and completeness) are evaluated using LLM capabilities, grounded in empirical analysis of real-world survey data. Validation on English and Korean datasets shows that our framework not only outperforms existing metrics but also correlates strongly with expert assessment, demonstrating high practical applicability for real-world tasks such as response quality prediction and response rejection.
- Anthology ID:
- 2025.emnlp-industry.65
- Volume:
- Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing: Industry Track
- Month:
- November
- Year:
- 2025
- Address:
- Suzhou, China
- Editors:
- Saloni Potdar, Lina Rojas-Barahona, Sebastien Montella
- Venue:
- EMNLP
- Publisher:
- Association for Computational Linguistics
- Pages:
- 963–982
- URL:
- https://preview.aclanthology.org/ingest-luhme/2025.emnlp-industry.65/
- DOI:
- 10.18653/v1/2025.emnlp-industry.65
- Cite (ACL):
- Subin An, Yugyeong Ji, Junyoung Kim, Heejin Kook, Yang Lu, and Josh Seltzer. 2025. Transparent Reference-free Automated Evaluation of Open-Ended User Survey Responses. In Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing: Industry Track, pages 963–982, Suzhou, China. Association for Computational Linguistics.
- Cite (Informal):
- Transparent Reference-free Automated Evaluation of Open-Ended User Survey Responses (An et al., EMNLP 2025)
- PDF:
- https://preview.aclanthology.org/ingest-luhme/2025.emnlp-industry.65.pdf
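The two-stage pipeline described in the abstract can be illustrated with a minimal sketch. This is not the authors' code: the gibberish filter below uses simple surface heuristics, and the stage-2 scorer is a placeholder heuristic standing in for the LLM judge the paper describes; all function names and thresholds here are illustrative assumptions.

```python
# Hedged sketch of a two-stage survey-response evaluator:
# stage 1 filters gibberish, stage 2 scores effort, relevance,
# and completeness. The stage-2 heuristics are stand-ins for the
# LLM-based evaluation used in the paper.
import re
from dataclasses import dataclass


@dataclass
class Evaluation:
    passed_filter: bool       # False -> rejected by the gibberish filter
    effort: float = 0.0       # each dimension scored in [0, 1]
    relevance: float = 0.0
    completeness: float = 0.0


def is_gibberish(text: str) -> bool:
    """Crude stage-1 filter: reject near-empty text, repeated characters,
    or letter strings with almost no vowels (keyboard mashing)."""
    stripped = text.strip()
    if len(stripped) < 2:
        return True
    if len(set(stripped.lower())) <= 2:        # e.g. "aaaa", "abab"
        return True
    letters = re.findall(r"[a-zA-Z]", stripped)
    if letters:
        vowel_ratio = sum(c.lower() in "aeiou" for c in letters) / len(letters)
        if vowel_ratio < 0.1:                  # e.g. "sdkfjhsdkfj"
            return True
    return False


def score_response(question: str, response: str) -> Evaluation:
    """Stage 2: placeholder heuristics where the paper would prompt an LLM."""
    if is_gibberish(response):
        return Evaluation(passed_filter=False)
    words = response.split()
    # Effort: longer responses as a rough proxy for elaboration.
    effort = min(1.0, len(words) / 30)
    # Relevance: lexical overlap with the question as a rough proxy.
    q_terms = {w.lower().strip(".,!?") for w in question.split()}
    r_terms = {w.lower().strip(".,!?") for w in words}
    relevance = len(q_terms & r_terms) / max(1, len(q_terms))
    # Completeness: does the response read as a finished sentence?
    completeness = 1.0 if response.rstrip().endswith((".", "!", "?")) else 0.5
    return Evaluation(True, effort, relevance, completeness)
```

In practice the stage-2 heuristics would be replaced by rubric-grounded LLM prompts per dimension, as the abstract indicates; the value of the two-stage split is that cheap filtering removes nonsensical responses before any LLM call is made.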