Bohan Jiang
2025
From Generation to Judgment: Opportunities and Challenges of LLM-as-a-judge
Dawei Li | Bohan Jiang | Liangjie Huang | Alimohammad Beigi | Chengshuai Zhao | Zhen Tan | Amrita Bhattacharjee | Yuxuan Jiang | Canyu Chen | Tianhao Wu | Kai Shu | Lu Cheng | Huan Liu
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing
Assessment and evaluation have long been critical challenges in artificial intelligence (AI) and natural language processing (NLP). Traditional methods, usually matching-based or small model-based, often fall short in open-ended and dynamic scenarios. Recent advancements in Large Language Models (LLMs) inspire the “LLM-as-a-judge” paradigm, where LLMs are leveraged to perform scoring, ranking, or selection for various machine learning evaluation scenarios. This paper presents a comprehensive survey of LLM-based judgment and assessment, offering an in-depth overview to review this evolving field. We first provide the definition from both input and output perspectives. Then we introduce a systematic taxonomy to explore LLM-as-a-judge along three dimensions: what to judge, how to judge, and how to benchmark. Finally, we also highlight key challenges and promising future directions for this emerging area.
2024
Large Language Models for Data Annotation and Synthesis: A Survey
Zhen Tan | Dawei Li | Song Wang | Alimohammad Beigi | Bohan Jiang | Amrita Bhattacharjee | Mansooreh Karami | Jundong Li | Lu Cheng | Huan Liu
Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing
Data annotation and synthesis generally refer to labeling raw data with relevant information or generating new data, which can be used to improve the efficacy of machine learning models. The process, however, is labor-intensive and costly. The emergence of advanced Large Language Models (LLMs), exemplified by GPT-4, presents an unprecedented opportunity to automate the complicated process of data annotation and synthesis. While existing surveys have extensively covered LLM architecture, training, and general applications, we uniquely focus on their specific utility for data annotation. This survey contributes to three core aspects: LLM-Based Annotation Generation, LLM-Generated Annotations Assessment, and LLM-Generated Annotations Utilization. Furthermore, this survey includes an in-depth taxonomy of data types that LLMs can annotate, a comprehensive review of learning strategies for models utilizing LLM-generated annotations, and a detailed discussion of the primary challenges and limitations associated with using LLMs for data annotation and synthesis. Serving as a key guide, this survey aims to assist researchers and practitioners in exploring the potential of the latest LLMs for data annotation, thereby fostering future advancements in this critical field.