Dora Zhao
2025
SPHERE: An Evaluation Card for Human-AI Systems
Dora Zhao | Qianou Ma | Xinran Zhao | Chenglei Si | Chenyang Yang | Ryan Louie | Ehud Reiter | Diyi Yang | Tongshuang Wu
Findings of the Association for Computational Linguistics: ACL 2025
In the era of Large Language Models (LLMs), establishing effective evaluation methods and standards for diverse human-AI interaction systems is increasingly challenging. To encourage more transparent documentation and facilitate discussion of human-AI system evaluation design options, we present SPHERE, an evaluation card that encompasses five key dimensions: 1) What is being evaluated? 2) How is the evaluation conducted? 3) Who is participating in the evaluation? 4) When is the evaluation conducted? 5) How is the evaluation validated? We conduct a review of 39 human-AI systems using SPHERE, outlining current evaluation practices and areas for improvement. We provide three recommendations for improving the validity and rigor of evaluation practices.
2024
Resampled Datasets Are Not Enough: Mitigating Societal Bias Beyond Single Attributes
Yusuke Hirota | Jerone Andrews | Dora Zhao | Orestis Papakyriakopoulos | Apostolos Modas | Yuta Nakashima | Alice Xiang
Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing
We tackle societal bias in image-text datasets by removing spurious correlations between protected groups and image attributes. Traditional methods only target labeled attributes, ignoring biases arising from unlabeled ones. Using text-guided inpainting models, our approach ensures that protected groups are independent of all attributes, and it mitigates inpainting biases through data filtering. Evaluations on multi-label image classification and image captioning tasks show that our method effectively reduces bias without compromising performance across various models. Specifically, we achieve an average societal bias reduction of 46.1% in leakage-based bias metrics for multi-label classification and 74.8% for image captioning.