Abstract
Recent works suggest that transformer models are capable of multi-tasking on diverse NLP tasks and of adapting to new tasks efficiently. However, the potential of these multi-task models may be limited, as they use the same set of parameters for all tasks. In contrast, humans tackle tasks in a more flexible way, by making proper presumptions about which skills and knowledge are relevant and executing only the necessary computations. Inspired by this, we propose to use task-level mixture-of-experts models, which have a collection of transformer layers (i.e., experts) and a router component that chooses among these experts dynamically and flexibly. We find that these models help improve the average relative gain (ARG) metric by 2.6% when adapting to unseen tasks in the few-shot setting, and by 5.6% in the zero-shot generalization setting. Further, we show that the learned routing decisions and experts partly rediscover human categorization of NLP tasks: certain experts are strongly associated with extractive tasks, some with classification tasks, and some with tasks requiring world knowledge.
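To make the architecture described above concrete, below is a minimal PyTorch sketch of a task-level mixture-of-experts layer: a per-task router produces mixture weights over a set of expert feed-forward sublayers. All names here (TaskLevelMoE, n_tasks, n_experts, task_id) are illustrative assumptions for exposition, not the authors' actual implementation.

```python
# Hypothetical sketch of task-level routing over expert layers.
# Not the paper's code; a minimal illustration of the idea.
import torch
import torch.nn as nn


class TaskLevelMoE(nn.Module):
    """Routes each *task* (rather than each token) to a mixture of experts."""

    def __init__(self, n_tasks: int, n_experts: int, d_model: int):
        super().__init__()
        # Each expert is a transformer-style feed-forward sublayer.
        self.experts = nn.ModuleList([
            nn.Sequential(
                nn.Linear(d_model, 4 * d_model),
                nn.ReLU(),
                nn.Linear(4 * d_model, d_model),
            )
            for _ in range(n_experts)
        ])
        # One learnable routing-logit vector per task; softmax turns it
        # into mixture weights over the experts.
        self.router = nn.Embedding(n_tasks, n_experts)

    def forward(self, x: torch.Tensor, task_id: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, d_model); task_id: (batch,) of task indices.
        weights = torch.softmax(self.router(task_id), dim=-1)  # (batch, n_experts)
        # Run every expert, then combine outputs per example's task weights.
        expert_outs = torch.stack([e(x) for e in self.experts], dim=1)
        return torch.einsum("be,besd->bsd", weights, expert_outs)


if __name__ == "__main__":
    moe = TaskLevelMoE(n_tasks=10, n_experts=4, d_model=64)
    x = torch.randn(2, 16, 64)        # two examples, 16 tokens each
    task_id = torch.tensor([3, 7])    # each example belongs to a known task
    print(moe(x, task_id).shape)      # torch.Size([2, 16, 64])
```

Because routing is decided at the task level, every example from the same task sees the same mixture of experts, which is what lets the learned routing be inspected for correspondence with human task categories.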
- Anthology ID: 2022.findings-emnlp.189
- Volume: Findings of the Association for Computational Linguistics: EMNLP 2022
- Month: December
- Year: 2022
- Address: Abu Dhabi, United Arab Emirates
- Editors: Yoav Goldberg, Zornitsa Kozareva, Yue Zhang
- Venue: Findings
- Publisher: Association for Computational Linguistics
- Pages: 2567–2592
- URL: https://aclanthology.org/2022.findings-emnlp.189
- DOI: 10.18653/v1/2022.findings-emnlp.189
- Cite (ACL): Qinyuan Ye, Juan Zha, and Xiang Ren. 2022. Eliciting and Understanding Cross-task Skills with Task-level Mixture-of-Experts. In Findings of the Association for Computational Linguistics: EMNLP 2022, pages 2567–2592, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics.
- Cite (Informal): Eliciting and Understanding Cross-task Skills with Task-level Mixture-of-Experts (Ye et al., Findings 2022)
- PDF: https://preview.aclanthology.org/corrections-2024-07/2022.findings-emnlp.189.pdf