Analyzing Multi-Task Learning for Abstractive Text Summarization

Frederic Thomas Kirstein, Jan Philip Wahle, Terry Ruas, Bela Gipp


Abstract
Despite the recent success of multi-task learning and pre-finetuning for natural language understanding, few works have studied the effects of task families on abstractive text summarization. Task families are a form of task grouping during the pre-finetuning stage to learn common skills, such as reading comprehension. To close this gap, we analyze the influence of multi-task learning strategies using task families for the English abstractive text summarization task. We group tasks into one of three strategies, i.e., sequential, simultaneous, and continual multi-task learning, and evaluate trained models through two downstream tasks. We find that certain combinations of task families (e.g., advanced reading comprehension and natural language inference) positively impact downstream performance. Further, we find that the choice and combination of task families influence downstream performance more than the training scheme, supporting the use of task families for abstractive text summarization.
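For readers unfamiliar with the three training schemes named in the abstract, the minimal Python sketch below illustrates one plausible way each scheme orders training batches across task families. The family names, the batch abstraction, and the exact continual schedule (a growing task pool, reshuffled at each stage) are assumptions for illustration only, not the paper's implementation.

import random
from itertools import chain

# Hypothetical task families, each mapped to a list of training batches.
families = {
    "reading_comprehension": ["rc_batch_1", "rc_batch_2"],
    "natural_language_inference": ["nli_batch_1", "nli_batch_2"],
    "summarization": ["sum_batch_1", "sum_batch_2"],
}

def sequential(families):
    # Finish each task family before starting the next one.
    return list(chain.from_iterable(families.values()))

def simultaneous(families, seed=0):
    # Interleave batches from all families in one shuffled stream.
    batches = list(chain.from_iterable(families.values()))
    random.Random(seed).shuffle(batches)
    return batches

def continual(families, seed=0):
    # Grow the task pool one family at a time, reshuffling the
    # accumulated batches at every stage (one plausible reading of
    # continual multi-task learning; the paper's schedule may differ).
    rng = random.Random(seed)
    schedule, pool = [], []
    for batches in families.values():
        pool.extend(batches)
        stage = pool.copy()
        rng.shuffle(stage)
        schedule.extend(stage)
    return schedule

for scheme in (sequential, simultaneous, continual):
    print(f"{scheme.__name__}: {scheme(families)}")

Printing the three schedules makes the contrast concrete: sequential never mixes families, simultaneous mixes everything from the start, and continual revisits earlier families as new ones are added.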
Anthology ID: 2022.gem-1.5
Volume: Proceedings of the 2nd Workshop on Natural Language Generation, Evaluation, and Metrics (GEM)
Month: December
Year: 2022
Address: Abu Dhabi, United Arab Emirates (Hybrid)
Editors: Antoine Bosselut, Khyathi Chandu, Kaustubh Dhole, Varun Gangal, Sebastian Gehrmann, Yacine Jernite, Jekaterina Novikova, Laura Perez-Beltrachini
Venue: GEM
SIG: SIGGEN
Publisher: Association for Computational Linguistics
Pages: 54–77
URL: https://aclanthology.org/2022.gem-1.5
DOI: 10.18653/v1/2022.gem-1.5
Cite (ACL): Frederic Thomas Kirstein, Jan Philip Wahle, Terry Ruas, and Bela Gipp. 2022. Analyzing Multi-Task Learning for Abstractive Text Summarization. In Proceedings of the 2nd Workshop on Natural Language Generation, Evaluation, and Metrics (GEM), pages 54–77, Abu Dhabi, United Arab Emirates (Hybrid). Association for Computational Linguistics.
Cite (Informal): Analyzing Multi-Task Learning for Abstractive Text Summarization (Kirstein et al., GEM 2022)
PDF: https://preview.aclanthology.org/improve-issue-templates/2022.gem-1.5.pdf
Video: https://preview.aclanthology.org/improve-issue-templates/2022.gem-1.5.mp4