Proceedings of the 17th International Natural Language Generation Conference: Tutorial Abstract

Anya Belz, João Sedoc, Craig Thomson, Simon Mille, Rudali Huidrom (Editors)


The INLG 2024 Tutorial on Human Evaluation of NLP System Quality: Background, Overall Aims, and Summaries of Taught Units
Anya Belz | João Sedoc | Craig Thomson | Simon Mille | Rudali Huidrom

Following numerous calls in the literature over the past ten years for improved practices and standardisation in human evaluation in Natural Language Processing, we held a tutorial on the topic at the 2024 INLG Conference. The tutorial addressed the structure, development, design, implementation, execution and analysis of human evaluations of NLP system quality. Hands-on practical sessions were run to help participants assimilate the material presented. Slides, lecture recordings, code and data have been made available on GitHub (https://github.com/Human-Evaluation-Tutorial/INLG-2024-Tutorial). In this paper, we summarise the content of the tutorial's eight units and describe its research context and aims.