Abstract
Recent research indicates that large language models (LLMs) possess a certain degree of script planning capability. However, there is still a lack of focused work on evaluating scripts generated by LLMs. Script evaluation is challenging because scripts are logically structured, sequentially organized, subject to commonsense constraints, and open-ended. In this work, we introduce a novel script evaluation dataset, MCScript, consisting of more than 1,500 script evaluation tasks and their steps, and develop ABSEval, an agent-based framework in which agents collaboratively evaluate scripts generated by LLMs. Our experiments demonstrate that ABSEval provides superior accuracy and relevance, aligning closely with human evaluation. We evaluate the script planning capabilities of 15 mainstream LLMs and provide a detailed analysis. Furthermore, we observe that the key factor influencing an LLM's script planning ability is not its parameter size, and we suggest improvements for evaluating open-ended questions.
- Anthology ID: 2024.emnlp-main.691
- Volume: Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing
- Month: November
- Year: 2024
- Address: Miami, Florida, USA
- Editors: Yaser Al-Onaizan, Mohit Bansal, Yun-Nung Chen
- Venue: EMNLP
- Publisher: Association for Computational Linguistics
- Pages: 12418–12434
- URL: https://preview.aclanthology.org/build-pipeline-with-new-library/2024.emnlp-main.691/
- DOI: 10.18653/v1/2024.emnlp-main.691
- Cite (ACL): Sirui Liang, Baoli Zhang, Jun Zhao, and Kang Liu. 2024. ABSEval: An Agent-based Framework for Script Evaluation. In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, pages 12418–12434, Miami, Florida, USA. Association for Computational Linguistics.
- Cite (Informal): ABSEval: An Agent-based Framework for Script Evaluation (Liang et al., EMNLP 2024)
- PDF: https://preview.aclanthology.org/build-pipeline-with-new-library/2024.emnlp-main.691.pdf
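For convenience, below is a BibTeX entry assembled from the metadata above; the citation key follows the usual ACL Anthology naming convention and is assumed here rather than copied from the Anthology.

```bibtex
@inproceedings{liang-etal-2024-abseval,
    title = "{ABSE}val: An Agent-based Framework for Script Evaluation",
    author = "Liang, Sirui and Zhang, Baoli and Zhao, Jun and Liu, Kang",
    editor = "Al-Onaizan, Yaser and Bansal, Mohit and Chen, Yun-Nung",
    booktitle = "Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing",
    month = nov,
    year = "2024",
    address = "Miami, Florida, USA",
    publisher = "Association for Computational Linguistics",
    url = "https://preview.aclanthology.org/build-pipeline-with-new-library/2024.emnlp-main.691/",
    doi = "10.18653/v1/2024.emnlp-main.691",
    pages = "12418--12434",
}
```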