@inproceedings{li-etal-2024-reprohum,
    title = "{R}epro{H}um {\#}0033-3: Comparable Relative Results with Lower Absolute Values in a Reproduction Study",
    author = "Li, Yiru  and
      Lai, Huiyuan  and
      Toral, Antonio  and
      Nissim, Malvina",
    editor = "Balloccu, Simone  and
      Belz, Anya  and
      Huidrom, Rudali  and
      Reiter, Ehud  and
      Sedoc, Joao  and
      Thomson, Craig",
    booktitle = "Proceedings of the Fourth Workshop on Human Evaluation of NLP Systems (HumEval) @ LREC-COLING 2024",
    month = may,
    year = "2024",
    address = "Torino, Italia",
    publisher = "ELRA and ICCL",
    url = "https://preview.aclanthology.org/ingest-emnlp/2024.humeval-1.21/",
    pages = "238--249",
    abstract = "In the context of the ReproHum project aimed at assessing the reliability of human evaluation, we replicated the human evaluation conducted in ``Generating Scientific Definitions with Controllable Complexity'' by August et al. (2022). Specifically, humans were asked to assess the fluency of automatically generated scientific definitions by three different models, with output complexity varying according to target audience. Evaluation conditions were kept as close as possible to the original study, except of necessary and minor adjustments. Our results, despite yielding lower absolute performance, show that relative performance across the three tested systems remains comparable to what was observed in the original paper. On the basis of lower inter-annotator agreement and feedback received from annotators in our experiment, we also observe that the ambiguity of the concept being evaluated may play a substantial role in human assessment."
}