@inproceedings{feng-boyd-graber-2022-learning,
    title = "Learning to Explain Selectively: A Case Study on Question Answering",
    author = "Feng, Shi  and
      Boyd-Graber, Jordan",
    editor = "Goldberg, Yoav  and
      Kozareva, Zornitsa  and
      Zhang, Yue",
    booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing",
    month = dec,
    year = "2022",
    address = "Abu Dhabi, United Arab Emirates",
    publisher = "Association for Computational Linguistics",
    url = "https://preview.aclanthology.org/ingest-emnlp/2022.emnlp-main.573/",
    doi = "10.18653/v1/2022.emnlp-main.573",
    pages = "8372--8382",
    abstract = "Explanations promise to bridge the gap between humans and AI, yet it remains difficult to achieve consistent improvement in AI-augmented human decision making. The usefulness of AI explanations depends on many factors, and always showing the same type of explanation in all cases is suboptimal{---}so is relying on heuristics to adapt explanations for each scenario. We propose learning to explain{''}selectively'': for each decision that the user makes, we use a model to choose the best explanation from a set of candidates and update this model with feedback to optimize human performance. We experiment on a question answering task, Quizbowl, and show that selective explanations improve human performance for both experts and crowdworkers."
}