@inproceedings{opitz-2024-schroedingers,
    title = "Schroedinger{'}s Threshold: When the {AUC} Doesn{'}t Predict Accuracy",
    author = "Opitz, Juri",
    editor = "Calzolari, Nicoletta  and
      Kan, Min-Yen  and
      Hoste, Veronique  and
      Lenci, Alessandro  and
      Sakti, Sakriani  and
      Xue, Nianwen",
    booktitle = "Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)",
    month = may,
    year = "2024",
    address = "Torino, Italia",
    publisher = "ELRA and ICCL",
    url = "https://preview.aclanthology.org/ingest-emnlp/2024.lrec-main.1255/",
    pages = "14400--14406",
    abstract = "The Area Under Curve measure (AUC) seems apt to evaluate and compare diverse models, possibly without calibration. An important example of AUC application is the evaluation and benchmarking of models that predict faithfulness of generated text. But we show that the AUC yields an academic and optimistic notion of accuracy that can misalign with the actual accuracy observed in application, yielding significant changes in benchmark rankings. To paint a more realistic picture of downstream model performance (and prepare it for actual application), we explore different calibration modes, testing calibration data and method."
}