Do Text-to-Text Multi-Task Learners Suffer from Task Conflict?

David Mueller, Nicholas Andrews, Mark Dredze


Abstract
Traditional multi-task learning architectures learn a single model across multiple tasks through a shared encoder followed by task-specific decoders. Learning these models often requires specialized training algorithms that address task conflict in the shared parameter updates, which can otherwise lead to negative transfer. A newer approach to multi-task learning within NLP homogenizes multi-task architectures into a shared encoder and language model decoder, which performs surprisingly well across a range of diverse tasks. Does this new architecture suffer from task conflicts that require specialized training algorithms? We study how certain factors in the shift towards text-to-text models affect multi-task conflict and negative transfer, finding that both directional conflict and transfer are surprisingly constant across architectures.
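Task conflict here refers to per-task gradients pulling the shared parameters in opposing directions. As a rough illustration only (not the paper's measurement code), below is a minimal PyTorch sketch of one common proxy for directional conflict: the cosine similarity between two tasks' gradients with respect to the shared parameters, where a negative value indicates the tasks conflict on that batch.

```python
import torch
import torch.nn.functional as F

def gradient_cosine(model, loss_a, loss_b):
    """Cosine similarity between the gradients of two task losses
    w.r.t. the model's (shared) trainable parameters."""
    params = [p for p in model.parameters() if p.requires_grad]
    grads_a = torch.autograd.grad(loss_a, params, retain_graph=True, allow_unused=True)
    grads_b = torch.autograd.grad(loss_b, params, retain_graph=True, allow_unused=True)

    def flatten(grads):
        # Zero-fill parameters unused by a task so both vectors align.
        return torch.cat([
            (g if g is not None else torch.zeros_like(p)).reshape(-1)
            for g, p in zip(grads, params)
        ])

    # < 0 means the two tasks push the shared parameters in
    # conflicting directions for this pair of batches.
    return F.cosine_similarity(flatten(grads_a), flatten(grads_b), dim=0).item()
```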
Anthology ID:
2022.findings-emnlp.206
Volume:
Findings of the Association for Computational Linguistics: EMNLP 2022
Month:
December
Year:
2022
Address:
Abu Dhabi, United Arab Emirates
Editors:
Yoav Goldberg, Zornitsa Kozareva, Yue Zhang
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
2843–2858
URL:
https://aclanthology.org/2022.findings-emnlp.206
DOI:
10.18653/v1/2022.findings-emnlp.206
Cite (ACL):
David Mueller, Nicholas Andrews, and Mark Dredze. 2022. Do Text-to-Text Multi-Task Learners Suffer from Task Conflict?. In Findings of the Association for Computational Linguistics: EMNLP 2022, pages 2843–2858, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics.
Cite (Informal):
Do Text-to-Text Multi-Task Learners Suffer from Task Conflict? (Mueller et al., Findings 2022)
PDF:
https://preview.aclanthology.org/nschneid-patch-4/2022.findings-emnlp.206.pdf