Jiwon Moon
2026
Don’t Judge Code by Its Cover: Exploring Biases in LLM Judges for Code Evaluation
Jiwon Moon | Yerin Hwang | Dongryeol Lee | Taegwan Kang | Yongil Kim | Kyomin Jung
Findings of the Association for Computational Linguistics: EACL 2026
With the growing use of large language models (LLMs) as evaluators, their application has expanded to code evaluation tasks, where they assess the correctness of generated code without relying on reference implementations. While this offers scalability and flexibility, it also raises a critical, unresolved question: Can LLM judges fairly and robustly evaluate semantically equivalent code with superficial variations? Functionally correct code often exhibits variations, such as differences in variable names, comments, or formatting, that should not influence its correctness. Yet whether LLM judges can reliably handle these variations remains unclear. We present the first comprehensive study of this issue, defining six types of potential bias in code evaluation and revealing their systematic impact on LLM judges. Across five programming languages and multiple LLMs, we empirically demonstrate that all tested LLM judges are susceptible to both positive and negative biases, resulting in inflated or unfairly low scores. Moreover, we observe that LLM judges remain vulnerable to these biases even when prompted to generate test cases before scoring, highlighting the need for more robust code evaluation methods.
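The robustness probe at the heart of this study is easy to reproduce in miniature. Below is a minimal Python sketch (not the paper's code) of one superficial, semantics-preserving perturbation, variable renaming: a robust judge should assign the original snippet and its variant identical correctness scores. The judge_score function is a hypothetical placeholder for whatever LLM-judge call is under test.

import ast

def rename_variables(code: str, mapping: dict[str, str]) -> str:
    """Rename identifiers without changing program semantics."""
    class Renamer(ast.NodeTransformer):
        def visit_Name(self, node: ast.Name) -> ast.Name:
            node.id = mapping.get(node.id, node.id)
            return node
        def visit_arg(self, node: ast.arg) -> ast.arg:
            node.arg = mapping.get(node.arg, node.arg)
            return node
    return ast.unparse(Renamer().visit(ast.parse(code)))

def judge_score(problem: str, code: str) -> float:
    """Hypothetical stand-in; replace with a real LLM-judge API call."""
    raise NotImplementedError

original = "def add(a, b):\n    return a + b"
variant = rename_variables(original, {"a": "x", "b": "y"})
# An unbiased judge should satisfy:
# judge_score(problem, original) == judge_score(problem, variant)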
2025
LimaCost: Data Valuation for Instruction Tuning of Large Language Models
Hyeonseok Moon | Jaehyung Seo | Seonmin Koo | Jinsung Kim | Young-kyoung Ham | Jiwon Moon | Heuiseok Lim
Findings of the Association for Computational Linguistics: EMNLP 2025
Instruction tuning (IT) is an effective approach for aligning large language models (LLMs) with human intentions, and there is ongoing discourse regarding data quality for IT. In an effort to find a robust criterion of data quality for IT, we introduce LimaCost, a data quality measure that exhibits a strong correlation with model performance. LimaCost builds on the LIMA dataset, whose effectiveness for IT has already been validated by several prior works. It then estimates the value of a given data point by estimating how many LIMA data points would be needed to approximate its gradient. Our experiments reveal that LimaCost enables effective data selection that yields high alignment performance, and we demonstrate that selecting data with high LimaCost is more effective than existing data selection strategies.
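As a rough illustration of the gradient-approximation idea described in the abstract, here is a minimal Python sketch: it counts how many LIMA gradients a greedy matching-pursuit loop needs before a candidate example's gradient is well approximated. The greedy loop, the tolerance parameter, and the lima_cost name are our assumptions for illustration, not the published algorithm; gradients are stood in by plain NumPy vectors.

import numpy as np

def lima_cost(candidate_grad: np.ndarray,
              lima_grads: np.ndarray,
              tol: float = 0.1) -> int:
    """Count the LIMA gradients needed until the residual of the
    candidate gradient falls below tol (relative L2 norm)."""
    residual = candidate_grad.astype(float).copy()
    target = np.linalg.norm(candidate_grad)
    used: list[int] = []
    for step in range(1, len(lima_grads) + 1):
        scores = np.abs(lima_grads @ residual)  # alignment with residual
        scores[used] = -1.0                     # never reuse a point
        best = int(np.argmax(scores))
        used.append(best)
        g = lima_grads[best].astype(float)
        residual -= (residual @ g) / (g @ g) * g  # remove that component
        if np.linalg.norm(residual) <= tol * target:
            return step
    return len(lima_grads)

rng = np.random.default_rng(0)
lima = rng.normal(size=(100, 32))   # 100 LIMA gradient vectors, dim 32
cand = lima[:3].sum(axis=0)         # easy-to-approximate candidate
print(lima_cost(cand, lima))

How the returned count maps to a data value (e.g., whether a high count means a more or less valuable example) follows the paper's definition; the sketch only makes the "how many LIMA points approximate this gradient" step concrete.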