Abstract
We introduce TLDR generation, a new form of extreme summarization, for scientific papers. TLDR generation involves high source compression and requires expert background knowledge and understanding of complex domain-specific language. To facilitate study on this task, we introduce SCITLDR, a new multi-target dataset of 5.4K TLDRs over 3.2K papers. SCITLDR contains both author-written and expert-derived TLDRs, where the latter are collected using a novel annotation protocol that produces high-quality summaries while minimizing annotation burden. We propose CATTS, a simple yet effective learning strategy for generating TLDRs that exploits titles as an auxiliary training signal. CATTS improves upon strong baselines under both automated metrics and human evaluations. Data and code are publicly available at https://github.com/allenai/scitldr.
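The abstract only names the strategy, so here is a minimal sketch of one way titles could serve as an auxiliary training signal: mix title-generation examples with TLDR examples and mark each source with a control token so a single seq2seq model learns both tasks. The token strings, record fields, and helper function below are illustrative assumptions, not the authors' released code.

```python
# Hypothetical sketch: title generation as a scaffold task for TLDR generation.
# Field names ("text", "tldr", "title") and control tokens are assumptions.

import random
from dataclasses import dataclass

@dataclass
class Example:
    source: str   # paper text with a control token prepended
    target: str   # text the model should generate

def build_multitask_data(papers):
    """Interleave TLDR-generation and title-generation training examples.

    The control token prepended to the source tells the model which
    target to produce, so one seq2seq model is trained on both tasks.
    """
    examples = []
    for paper in papers:
        examples.append(Example("<|TLDR|> " + paper["text"], paper["tldr"]))
        examples.append(Example("<|TITLE|> " + paper["text"], paper["title"]))
    random.shuffle(examples)
    return examples

# At inference time, one would prepend the TLDR control token, e.g.:
#   model.generate("<|TLDR|> " + new_paper_text)
```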
- Anthology ID: 2020.findings-emnlp.428
- Volume: Findings of the Association for Computational Linguistics: EMNLP 2020
- Month: November
- Year: 2020
- Address: Online
- Editors: Trevor Cohn, Yulan He, Yang Liu
- Venue: Findings
- Publisher: Association for Computational Linguistics
- Pages: 4766–4777
- URL: https://aclanthology.org/2020.findings-emnlp.428
- DOI: 10.18653/v1/2020.findings-emnlp.428
- Cite (ACL): Isabel Cachola, Kyle Lo, Arman Cohan, and Daniel Weld. 2020. TLDR: Extreme Summarization of Scientific Documents. In Findings of the Association for Computational Linguistics: EMNLP 2020, pages 4766–4777, Online. Association for Computational Linguistics.
- Cite (Informal): TLDR: Extreme Summarization of Scientific Documents (Cachola et al., Findings 2020)
- PDF: https://preview.aclanthology.org/naacl24-info/2020.findings-emnlp.428.pdf
- Code: allenai/scitldr + additional community code
- Data: SciTLDR, BigPatent, NEWSROOM, S2ORC, ScisummNet, TalkSumm