Properties and Challenges of LLM-Generated Explanations

Jenny Kunz, Marco Kuhlmann


Abstract
The self-rationalising capabilities of large language models (LLMs) have been explored in restricted settings, using task-specific data sets. However, current LLMs do not (only) rely on specifically annotated data; nonetheless, they frequently explain their outputs. The properties of the generated explanations are influenced by the pre-training corpus and by the target data used for instruction fine-tuning. As the pre-training corpus includes a large amount of human-written explanations “in the wild”, we hypothesise that LLMs adopt common properties of human explanations. By analysing the outputs for a multi-domain instruction fine-tuning data set, we find that the generated explanations show selectivity and contain illustrative elements, but are less frequently subjective or misleading. We discuss reasons for and consequences of the presence or absence of these properties. In particular, we outline positive and negative implications depending on the goals and user groups of the self-rationalising system.
Anthology ID:
2024.hcinlp-1.2
Volume:
Proceedings of the Third Workshop on Bridging Human-Computer Interaction and Natural Language Processing
Month:
June
Year:
2024
Address:
Mexico City, Mexico
Editors:
Su Lin Blodgett, Amanda Cercas Curry, Sunipa Dev, Michael Madaio, Ani Nenkova, Diyi Yang, Ziang Xiao
Venues:
HCINLP | WS
Publisher:
Association for Computational Linguistics
Pages:
13–27
URL:
https://aclanthology.org/2024.hcinlp-1.2
Cite (ACL):
Jenny Kunz and Marco Kuhlmann. 2024. Properties and Challenges of LLM-Generated Explanations. In Proceedings of the Third Workshop on Bridging Human-Computer Interaction and Natural Language Processing, pages 13–27, Mexico City, Mexico. Association for Computational Linguistics.
Cite (Informal):
Properties and Challenges of LLM-Generated Explanations (Kunz & Kuhlmann, HCINLP-WS 2024)
PDF:
https://preview.aclanthology.org/revert-3132-ingestion-checklist/2024.hcinlp-1.2.pdf