Abstract
We consider two types of numeric representations for conveying the uncertainty of predictions made by Machine Learning (ML) models: confidence-based (e.g., “the AI is 90% confident”) and frequency-based (e.g., “the AI was correct in 180 (90%) out of 200 cases”). We conducted a user study to determine which factors influence users’ acceptance of predictions made by ML models, and how the two types of uncertainty representations affect users’ views about explanations. Our results show that users’ acceptance of ML model predictions depends mainly on the models’ confidence, and that explanations that include uncertainty information are deemed better in several respects than explanations that omit it, with frequency-based representations being deemed better than confidence-based representations.
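To make the two representations concrete, here is a minimal sketch (ours, not the authors’ code) of template-based phrasings for a hypothetical prediction; the function names and templates are illustrative assumptions based on the examples in the abstract:

```python
def confidence_phrase(p: float) -> str:
    """Confidence-based representation, e.g. 'the AI is 90% confident'."""
    return f"the AI is {p:.0%} confident"

def frequency_phrase(correct: int, total: int) -> str:
    """Frequency-based representation, e.g.
    'the AI was correct in 180 (90%) out of 200 cases'."""
    return f"the AI was correct in {correct} ({correct / total:.0%}) out of {total} cases"

print(confidence_phrase(0.9))      # the AI is 90% confident
print(frequency_phrase(180, 200))  # the AI was correct in 180 (90%) out of 200 cases
```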
- Anthology ID: 2024.inlg-main.4
- Volume: Proceedings of the 17th International Natural Language Generation Conference
- Month: September
- Year: 2024
- Address: Tokyo, Japan
- Editors: Saad Mahamood, Nguyen Le Minh, Daphne Ippolito
- Venue: INLG
- SIG: SIGGEN
- Publisher: Association for Computational Linguistics
- Pages: 30–46
- URL: https://aclanthology.org/2024.inlg-main.4
- Cite (ACL): Ingrid Zukerman and Sameen Maruf. 2024. Communicating Uncertainty in Explanations of the Outcomes of Machine Learning Models. In Proceedings of the 17th International Natural Language Generation Conference, pages 30–46, Tokyo, Japan. Association for Computational Linguistics.
- Cite (Informal): Communicating Uncertainty in Explanations of the Outcomes of Machine Learning Models (Zukerman & Maruf, INLG 2024)
- PDF: https://preview.aclanthology.org/dois-2013-emnlp/2024.inlg-main.4.pdf