MuseScorer: Idea Originality Scoring At Scale
Ali Sarosh Bangash | Krish Veera | Ishfat Abrar Islam | Raiyan Abdul Baten
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing
An objective, face-valid method for scoring idea originality is to measure each idea's statistical infrequency within a population, an approach long used in creativity research. Yet computing these frequencies requires manually bucketing rephrasings of the same idea, a process that is subjective, labor-intensive, error-prone, and brittle at scale. We introduce MuseScorer, a fully automated, psychometrically validated system for frequency-based originality scoring. MuseScorer integrates a Large Language Model (LLM) with externally orchestrated retrieval: given a new idea, it retrieves semantically similar prior idea buckets and zero-shot prompts the LLM to judge whether the idea fits an existing bucket or forms a new one. These buckets enable frequency-based originality scoring without human annotation. Across five datasets (N = 1,143 participants, n = 16,294 ideas), MuseScorer matches human annotators in idea-clustering structure (AMI = 0.59) and participant-level scoring (r = 0.89), while demonstrating strong convergent and external validity. The system enables scalable, intent-sensitive, and human-aligned originality assessment for creativity research.
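For readers unfamiliar with frequency-based originality scoring, the sketch below illustrates the pipeline the abstract describes: each new idea is matched against retrieved, semantically similar buckets by a zero-shot LLM call, and originality is then derived from bucket frequencies. The helpers `embed`, `retrieve_similar`, and `llm_assign_bucket`, as well as the specific "1 minus relative frequency" formula, are assumptions for illustration, not MuseScorer's actual implementation.

```python
# Illustrative sketch (not the authors' code): retrieval-plus-LLM idea bucketing
# followed by frequency-based originality scoring, as outlined in the abstract.
# `embed`, `retrieve_similar`, and `llm_assign_bucket` are hypothetical placeholders
# for an embedding model, a vector retriever, and a zero-shot LLM judgment call.

def bucket_ideas(ideas, embed, retrieve_similar, llm_assign_bucket):
    """Assign each idea to an existing bucket of rephrasings or open a new one."""
    buckets = {}  # bucket_id -> list of member ideas
    for idea in ideas:
        # Retrieve semantically similar prior buckets for the new idea.
        candidates = retrieve_similar(embed(idea), buckets)
        # Zero-shot LLM judgment: an existing bucket_id, or None to start a new bucket.
        bucket_id = llm_assign_bucket(idea, candidates)
        if bucket_id is None:
            bucket_id = len(buckets)
            buckets[bucket_id] = []
        buckets[bucket_id].append(idea)
    return buckets


def originality_scores(buckets):
    """Score ideas by the statistical infrequency of their bucket.

    The exact formula is an assumption: 1 minus the bucket's relative frequency.
    """
    total = sum(len(members) for members in buckets.values())
    scores = {}
    for members in buckets.values():
        freq = len(members) / total
        for idea in members:
            scores[idea] = 1.0 - freq
    return scores
```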