Abstract
Automatic summarization research has traditionally focused on producing high-quality, general-purpose summaries of documents. However, many applications require more specific summaries, such as supporting question answering or topic-based literature discovery. In this paper we study the problem of conditional summarization, in which content selection and surface realization are explicitly conditioned on an ad-hoc natural language question or topic description. Because of the difficulty of obtaining sufficient reference summaries to support arbitrary conditional summarization, we explore the use of multi-task fine-tuning (MTFT) on twenty-one natural language tasks to enable zero-shot conditional summarization on five tasks. We present four new summarization datasets, two novel “online” or adaptive task-mixing strategies, and report zero-shot performance using T5 and BART, demonstrating that MTFT can improve zero-shot summarization quality.
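The core recipe the abstract describes (fine-tuning one seq2seq model on a mixture of tasks, sampling examples according to per-task mixing weights) can be sketched as follows. This is an illustrative sketch only, not the released h4ste/mtft_zsl code: the task pool, input formatting, and static proportional weights are assumptions, and the paper's adaptive strategies would update the weights during training rather than fixing them.

```python
import random
from transformers import T5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

# Hypothetical task pool: each task maps to (source, target) text pairs.
# Conditional inputs prepend the question/topic to the document text.
tasks = {
    "squad": [("question: Who wrote Hamlet? context: Hamlet is a tragedy "
               "written by William Shakespeare.", "William Shakespeare")],
    "summarize": [("summarize: The quick brown fox jumps over the lazy dog.",
                   "A fox jumps over a dog.")],
}

# Static proportional mixing: sample a task in proportion to its size.
# The paper's "online" strategies instead adapt these weights during training.
weights = {name: len(examples) for name, examples in tasks.items()}

def sample_example():
    task = random.choices(list(weights), weights=list(weights.values()))[0]
    return random.choice(tasks[task])

source, target = sample_example()
inputs = tokenizer(source, return_tensors="pt", truncation=True)
labels = tokenizer(target, return_tensors="pt", truncation=True).input_ids
loss = model(**inputs, labels=labels).loss  # loss for one fine-tuning step
loss.backward()
```

At inference time the same model is applied zero-shot: a conditional summarization request is formatted like the training tasks (question or topic prepended to the document) and decoded with `model.generate`.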
- Anthology ID: 2020.findings-emnlp.289
- Volume: Findings of the Association for Computational Linguistics: EMNLP 2020
- Month: November
- Year: 2020
- Address: Online
- Editors: Trevor Cohn, Yulan He, Yang Liu
- Venue: Findings
- Publisher: Association for Computational Linguistics
- Pages: 3215–3226
- URL: https://preview.aclanthology.org/icon-24-ingestion/2020.findings-emnlp.289/
- DOI: 10.18653/v1/2020.findings-emnlp.289
- Cite (ACL): Travis Goodwin, Max Savery, and Dina Demner-Fushman. 2020. Towards Zero-Shot Conditional Summarization with Adaptive Multi-Task Fine-Tuning. In Findings of the Association for Computational Linguistics: EMNLP 2020, pages 3215–3226, Online. Association for Computational Linguistics.
- Cite (Informal): Towards Zero-Shot Conditional Summarization with Adaptive Multi-Task Fine-Tuning (Goodwin et al., Findings 2020)
- PDF: https://preview.aclanthology.org/icon-24-ingestion/2020.findings-emnlp.289.pdf
- Code: h4ste/mtft_zsl
- Data: BioASQ, COPA, CosmosQA, MC-TACO, SQuAD