Abstract
In the context of the ReproHum project, aimed at assessing the reliability of human evaluation, we replicated the human evaluation conducted in “Generating Scientific Definitions with Controllable Complexity” by August et al. (2022). Specifically, humans were asked to assess the fluency of scientific definitions automatically generated by three different models, with output complexity varying according to the target audience. Evaluation conditions were kept as close as possible to the original study, except for necessary, minor adjustments. Our results, despite lower absolute values, show that the relative performance of the three tested systems remains comparable to that observed in the original paper. Based on the lower inter-annotator agreement and the feedback received from annotators in our experiment, we also observe that the ambiguity of the concept being evaluated may play a substantial role in human assessment.
- Anthology ID: 2024.humeval-1.21
- Volume: Proceedings of the Fourth Workshop on Human Evaluation of NLP Systems (HumEval) @ LREC-COLING 2024
- Month: May
- Year: 2024
- Address: Torino, Italia
- Editors: Simone Balloccu, Anya Belz, Rudali Huidrom, Ehud Reiter, Joao Sedoc, Craig Thomson
- Venues: HumEval | WS
- Publisher: ELRA and ICCL
- Pages: 238–249
- URL: https://aclanthology.org/2024.humeval-1.21
- Cite (ACL): Yiru Li, Huiyuan Lai, Antonio Toral, and Malvina Nissim. 2024. ReproHum #0033-3: Comparable Relative Results with Lower Absolute Values in a Reproduction Study. In Proceedings of the Fourth Workshop on Human Evaluation of NLP Systems (HumEval) @ LREC-COLING 2024, pages 238–249, Torino, Italia. ELRA and ICCL.
- Cite (Informal): ReproHum #0033-3: Comparable Relative Results with Lower Absolute Values in a Reproduction Study (Li et al., HumEval-WS 2024)
- PDF: https://preview.aclanthology.org/add_acl24_videos/2024.humeval-1.21.pdf