IDEAlign: Comparing Ideas of Large Language Models to Domain Experts

HyunJi Nam, Lucía Langlois, Jim Malamut, Mei Tan, Dorottya Demszky


Abstract
Large language models (LLMs) are increasingly used to produce open-ended, interpretive annotations, yet there is no validated, scalable measure of *idea-level similarity* to expert annotations. We (i) introduce the content evaluation of LLM annotations as a core, understudied task, (ii) propose IDEAlign for capturing expert similarity judgments via odd-one-out tasks, and (iii) benchmark various similarity methods, such as text embeddings, topic models, and LLM-as-a-judge, against these human ratings. Applying this approach to two real-world educational datasets (interpreting math reasoning and feedback generation), we find that most metrics fail to capture the nuanced dimensions of similarity meaningful to experts. LLM-as-a-judge performs best (11–18% improvement over other methods) but still falls short of expert alignment, making it useful as a triage filter rather than a substitute for human review. Our work demonstrates the difficulty of evaluating open-ended LLM annotations at scale and positions IDEAlign as a reusable protocol for benchmarking this task, thereby informing the responsible deployment of LLMs.
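To make the odd-one-out protocol concrete, below is a minimal sketch of how an embedding-based similarity metric could be scored against an expert's odd-one-out judgment. This is an illustration only, not the authors' implementation: the embedding model, the toy annotations, and the expert label are all assumptions introduced here for clarity.

```python
# Hedged sketch: scoring an embedding-based similarity metric against an
# expert odd-one-out judgment, in the spirit of the protocol the abstract
# describes. Model name and toy data are illustrative assumptions.
import numpy as np
from sentence_transformers import SentenceTransformer  # assumed dependency

model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed embedding model


def odd_one_out(annotations: list[str]) -> int:
    """Return the index of the annotation least similar to the others,
    by mean pairwise cosine similarity over normalized embeddings."""
    emb = model.encode(annotations, normalize_embeddings=True)
    sims = emb @ emb.T                    # cosine similarity matrix
    np.fill_diagonal(sims, 0.0)           # ignore self-similarity
    mean_sim = sims.sum(axis=1) / (len(annotations) - 1)
    return int(np.argmin(mean_sim))       # least similar = odd one out


# Hypothetical triplet of expert-style annotations:
triplet = [
    "The student misapplies the distributive property.",
    "The student distributes multiplication incorrectly over addition.",
    "The feedback praises effort without addressing the error.",
]
expert_choice = 2  # hypothetical expert label
print("metric agrees with expert:", odd_one_out(triplet) == expert_choice)
```

Aggregating such agreements over many triplets would yield the kind of benchmark score against human ratings that the abstract reports for text embeddings, topic models, and LLM-as-a-judge.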
Anthology ID:
2026.eacl-long.182
Volume:
Proceedings of the 19th Conference of the European Chapter of the Association for Computational Linguistics (Volume 1: Long Papers)
Month:
March
Year:
2026
Address:
Rabat, Morocco
Editors:
Vera Demberg, Kentaro Inui, Lluís Màrquez
Venue:
EACL
Publisher:
Association for Computational Linguistics
Pages:
3908–3925
URL:
https://preview.aclanthology.org/ingest-eacl/2026.eacl-long.182/
Cite (ACL):
HyunJi Nam, Lucía Langlois, Jim Malamut, Mei Tan, and Dorottya Demszky. 2026. IDEAlign: Comparing Ideas of Large Language Models to Domain Experts. In Proceedings of the 19th Conference of the European Chapter of the Association for Computational Linguistics (Volume 1: Long Papers), pages 3908–3925, Rabat, Morocco. Association for Computational Linguistics.
Cite (Informal):
IDEAlign: Comparing Ideas of Large Language Models to Domain Experts (Nam et al., EACL 2026)
PDF:
https://preview.aclanthology.org/ingest-eacl/2026.eacl-long.182.pdf