Abstract
In this paper, we tackle the lay summarization task, which aims to automatically produce lay summaries of scientific papers, as part of the first CL-LaySumm 2020 shared task at the SDP workshop at EMNLP 2020. We present our approach of using Pre-training with Extracted Gap-sentences for Abstractive Summarization (PEGASUS; Zhang et al., 2019b) to produce the lay summary, combining it with an extractive summarization model based on Bidirectional Encoder Representations from Transformers (BERT; Devlin et al., 2018) and readability metrics that measure sentence-level readability to further improve the quality of the summary. Our model achieves remarkable performance on ROUGE metrics, demonstrating that the produced summary is more readable while still summarizing the main points of the document.
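The abstract does not detail how the abstractive and extractive components are combined, so the following is only a minimal sketch of the two building blocks it names: PEGASUS generation and a readability-based sentence filter. The Hugging Face `transformers` checkpoint `google/pegasus-large`, the `textstat` Flesch Reading Ease score, the threshold, and the filtering heuristic are all illustrative assumptions, not the paper's exact pipeline.

```python
# Sketch: PEGASUS abstractive summarization plus a readability filter.
# Model name, readability metric, and threshold are assumptions made
# for illustration; they are not taken from the paper.
from transformers import PegasusTokenizer, PegasusForConditionalGeneration
import textstat

MODEL = "google/pegasus-large"  # assumption: any PEGASUS checkpoint works here
tokenizer = PegasusTokenizer.from_pretrained(MODEL)
model = PegasusForConditionalGeneration.from_pretrained(MODEL)

def abstractive_summary(document: str, max_len: int = 256) -> str:
    """Generate an abstractive summary of a document with PEGASUS."""
    batch = tokenizer(document, truncation=True, padding="longest",
                      return_tensors="pt")
    ids = model.generate(**batch, max_length=max_len, num_beams=4)
    return tokenizer.decode(ids[0], skip_special_tokens=True)

def readable_sentences(sentences, threshold=50.0):
    """Keep sentences whose Flesch Reading Ease score clears a threshold
    (higher = easier to read); an illustrative readability filter that
    could post-process extractive or abstractive output."""
    return [s for s in sentences if textstat.flesch_reading_ease(s) >= threshold]
```

In a pipeline of this shape, the readability filter would rank or prune candidate sentences so the final lay summary favors plainer language; the actual combination strategy is described in the paper itself.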
- Anthology ID: 2020.sdp-1.38
- Volume: Proceedings of the First Workshop on Scholarly Document Processing
- Month: November
- Year: 2020
- Address: Online
- Venue: sdp
- Publisher: Association for Computational Linguistics
- Pages: 328–335
- URL: https://aclanthology.org/2020.sdp-1.38
- DOI: 10.18653/v1/2020.sdp-1.38
- Cite (ACL): Seungwon Kim. 2020. Using Pre-Trained Transformer for Better Lay Summarization. In Proceedings of the First Workshop on Scholarly Document Processing, pages 328–335, Online. Association for Computational Linguistics.
- Cite (Informal): Using Pre-Trained Transformer for Better Lay Summarization (Kim, sdp 2020)
- PDF: https://aclanthology.org/2020.sdp-1.38.pdf