Abstract
This paper presents a study of the issues involved in using NLG to humanise explanations produced by LIME, a popular interpretable machine learning framework. Our study shows that the self-reported rating of the NLG explanation was higher than that of the non-NLG explanation. However, when tested for comprehension, the results were less clear-cut, indicating that further studies are needed to uncover the factors responsible for high-quality NLG explanations.
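Since the paper contrasts LIME's standard feature-weight output with an NLG rendering of it, the following minimal sketch shows what template-based verbalisation of a LIME explanation can look like. It is an illustration under assumed choices (the iris dataset, a random forest, and a hypothetical `verbalise()` template), not the system evaluated in the paper.

```python
# A minimal, hypothetical sketch (not the authors' system): verbalising a
# LIME explanation with a fixed English template. The dataset, model, and
# the verbalise() template are illustrative assumptions.
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

data = load_iris()
model = RandomForestClassifier(random_state=0).fit(data.data, data.target)

explainer = LimeTabularExplainer(
    data.data,
    feature_names=data.feature_names,
    class_names=list(data.target_names),
    mode="classification",
)

# Explain the model's predicted class for one instance; as_list() returns
# (feature condition, weight) pairs, LIME's usual non-NLG output.
pred = int(model.predict(data.data[:1])[0])
exp = explainer.explain_instance(
    data.data[0], model.predict_proba, labels=(pred,), num_features=3
)

def verbalise(pairs, class_name):
    """Template-based NLG: turn LIME's weighted conditions into a sentence."""
    clauses = [
        f"'{cond}' {'supports' if weight > 0 else 'counts against'} it"
        for cond, weight in pairs
    ]
    return f"The model predicts '{class_name}' because " + "; ".join(clauses) + "."

print(verbalise(exp.as_list(label=pred), data.target_names[pred]))
```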
- Anthology ID: W18-6522
- Volume: Proceedings of the 11th International Conference on Natural Language Generation
- Month: November
- Year: 2018
- Address: Tilburg University, The Netherlands
- Editors: Emiel Krahmer, Albert Gatt, Martijn Goudbeek
- Venue: INLG
- SIG: SIGGEN
- Publisher: Association for Computational Linguistics
- Pages: 177–182
- URL: https://aclanthology.org/W18-6522
- DOI: 10.18653/v1/W18-6522
- Cite (ACL): James Forrest, Somayajulu Sripada, Wei Pang, and George Coghill. 2018. Towards making NLG a voice for interpretable Machine Learning. In Proceedings of the 11th International Conference on Natural Language Generation, pages 177–182, Tilburg University, The Netherlands. Association for Computational Linguistics.
- Cite (Informal): Towards making NLG a voice for interpretable Machine Learning (Forrest et al., INLG 2018)
- PDF: https://preview.aclanthology.org/nschneid-patch-4/W18-6522.pdf