Improving Abstractive Summarization with Commonsense Knowledge

Pranav Nair, Anil Kumar Singh


Abstract
Large-scale pretrained models have demonstrated strong performance on several natural language generation and understanding benchmarks. However, injecting commonsense into them so that they generate more realistic text remains a challenge. Inspired by previous work on commonsense knowledge generation and generative commonsense reasoning, we introduce two methods for adding commonsense reasoning skills and knowledge to abstractive summarization models. Both methods outperform the baseline on ROUGE scores. Human evaluation results suggest that summaries generated by our methods are more realistic and contain fewer commonsense errors.
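This page does not describe the two methods in detail, but the Data section below lists ConceptNet as a resource. As a purely illustrative sketch (not the authors' pipeline), one common way to supply commonsense knowledge to a summarizer is to retrieve relation triples for source-document concepts from ConceptNet's public REST API; the helper name `conceptnet_neighbors` and the choice of query term are our own for this example.

    # Illustrative only: fetch commonsense relations for a concept from ConceptNet.
    import requests

    def conceptnet_neighbors(term: str, limit: int = 5):
        """Return (relation, start, end) triples involving `term` from ConceptNet."""
        url = f"http://api.conceptnet.io/c/en/{term}"
        edges = requests.get(url, params={"limit": limit}).json().get("edges", [])
        return [(e["rel"]["label"], e["start"]["label"], e["end"]["label"])
                for e in edges]

    # Retrieved triples could, for instance, be appended to a summarization
    # model's input as extra context.
    for rel, start, end in conceptnet_neighbors("summary"):
        print(f"{start} --{rel}--> {end}")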
Anthology ID:
2021.ranlp-srw.19
Volume:
Proceedings of the Student Research Workshop Associated with RANLP 2021
Month:
September
Year:
2021
Address:
Online
Editors:
Souhila Djabri, Dinara Gimadi, Tsvetomila Mihaylova, Ivelina Nikolova-Koleva
Venue:
RANLP
Publisher:
INCOMA Ltd.
Pages:
135–143
URL:
https://aclanthology.org/2021.ranlp-srw.19
Cite (ACL):
Pranav Nair and Anil Kumar Singh. 2021. Improving Abstractive Summarization with Commonsense Knowledge. In Proceedings of the Student Research Workshop Associated with RANLP 2021, pages 135–143, Online. INCOMA Ltd.
Cite (Informal):
Improving Abstractive Summarization with Commonsense Knowledge (Nair & Singh, RANLP 2021)
PDF:
https://preview.aclanthology.org/ingest-2024-clasp/2021.ranlp-srw.19.pdf
Data
CommonGen, ConceptNet