Abstract
Explanation regeneration, introduced with the WorldTree corpus (Jansen et al., 2018), is an emerging NLP task involving multi-hop inference to explain the correct answer in multiple-choice QA. The task is challenging, as evidenced by the low state-of-the-art performance (below 60% F-score) demonstrated so far. Among state-of-the-art approaches, fine-tuned transformer-based (Vaswani et al., 2017) BERT models have shown great promise toward continued performance improvements, whereas approaches relying on surface-level cues alone exhibit performance saturation. In this work, we take a novel direction by addressing a particular linguistic characteristic of the data: we introduce a lightweight focus feature into the transformer-based model and examine the resulting task improvements. Our evaluations reveal a significantly positive impact of this lightweight focus feature, which achieves scores second only to a far more computationally intensive system.
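As a rough illustration of the idea sketched in the abstract (and not the authors' actual implementation), one plausible way to expose focus words to BERT during fine-tuning is to mark them with an added special token and then score (question, candidate explanation fact) pairs for relevance. The `[FOC]` marker, the `mark_focus` helper, and the toy question/fact below are hypothetical; the paper's own focus-feature encoding may differ.

```python
# Minimal sketch, assuming focus words are flagged with a special [FOC] token
# before fine-tuning BERT as a pairwise relevance scorer.
import torch
from transformers import BertTokenizerFast, BertForSequenceClassification

tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
# Hypothetical marker token for focus words.
tokenizer.add_special_tokens({"additional_special_tokens": ["[FOC]"]})

model = BertForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)
model.resize_token_embeddings(len(tokenizer))  # account for the added token


def mark_focus(text: str, focus_words: set) -> str:
    """Prefix each focus word with the [FOC] marker."""
    return " ".join(
        f"[FOC] {w}" if w.lower() in focus_words else w for w in text.split()
    )


# Toy example: question + answer as segment A, a candidate explanation fact as segment B.
question = "What keeps the planets in orbit around the sun? gravity"
fact = "gravitational attraction causes planets to orbit stars"
focus = {"gravity", "gravitational", "orbit", "planets"}

inputs = tokenizer(
    mark_focus(question, focus),
    mark_focus(fact, focus),
    truncation=True,
    padding=True,
    return_tensors="pt",
)

# In a real setup, the relevance logits would be fine-tuned with gold explanation
# facts as positives; here we only run a forward pass to show the plumbing.
with torch.no_grad():
    logits = model(**inputs).logits
print(logits.softmax(dim=-1))
```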
- Anthology ID: 2020.starsem-1.13
- Volume: Proceedings of the Ninth Joint Conference on Lexical and Computational Semantics
- Month: December
- Year: 2020
- Address: Barcelona, Spain (Online)
- Venue: *SEM
- SIGs: SIGLEX | SIGSEM
- Publisher: Association for Computational Linguistics
- Pages: 125–130
- URL: https://aclanthology.org/2020.starsem-1.13
- Cite (ACL): Isaiah Onando Mulang’, Jennifer D’Souza, and Sören Auer. 2020. Fine-tuning BERT with Focus Words for Explanation Regeneration. In Proceedings of the Ninth Joint Conference on Lexical and Computational Semantics, pages 125–130, Barcelona, Spain (Online). Association for Computational Linguistics.
- Cite (Informal): Fine-tuning BERT with Focus Words for Explanation Regeneration (Mulang’ et al., *SEM 2020)
- PDF: https://preview.aclanthology.org/ingestion-script-update/2020.starsem-1.13.pdf
- Data: Worldtree