What Makes Instruction Learning Hard? An Investigation and a New Challenge in a Synthetic Environment

Matthew Finlayson, Kyle Richardson, Ashish Sabharwal, Peter Clark


Abstract
The instruction learning paradigm—where a model learns to perform new tasks from task descriptions alone—has become popular in research on general-purpose models. The capabilities of large transformer models as instruction learners, however, remain poorly understood. We use a controlled synthetic environment to characterize such capabilities. Specifically, we use the task of deciding whether a given string matches a regular expression (viewed as an instruction) to identify properties of tasks, instructions, and instances that make instruction learning challenging. For instance, we find that our model, a fine-tuned T5-based text2text transformer, struggles with large regular languages, suggesting that less precise instructions are challenging for models. Instruction executions that require tracking longer contexts of prior steps are also difficult. We use our findings to systematically construct a challenging instruction learning dataset, which we call Hard RegSet. Fine-tuning on Hard RegSet, our large transformer learns to correctly interpret (with at least 90% accuracy) only 65.6% of test instructions, and 11%-24% of the instructions in out-of-distribution generalization settings. We thus propose Hard RegSet as a challenging instruction learning dataset, and a controlled environment for studying instruction learning.
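The task described in the abstract, deciding whether a string matches a regular expression given as an instruction, can be sketched as follows. This is a minimal illustration, not the paper's actual data pipeline: the function name and the exact instance format (instruction/input/target fields) are assumptions.

```python
import re

def make_instance(pattern: str, string: str) -> dict:
    """Build a hypothetical instruction-learning instance: the regex serves
    as the 'instruction', the string is the input, and the gold target says
    whether the string belongs to the regular language (hypothetical format)."""
    label = "match" if re.fullmatch(pattern, string) else "no match"
    return {
        "instruction": f"Does the string match the regular expression {pattern}?",
        "input": string,
        "target": label,
    }

# Example: strings drawn from (and outside) the language of (ab)*
print(make_instance("(ab)*", "ababab")["target"])  # match
print(make_instance("(ab)*", "ababa")["target"])   # no match
```

A text2text model like the fine-tuned T5 used in the paper would then be trained to map the instruction-plus-input text to the target label.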
Anthology ID:
2022.emnlp-main.27
Volume:
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
Month:
December
Year:
2022
Address:
Abu Dhabi, United Arab Emirates
Editors:
Yoav Goldberg, Zornitsa Kozareva, Yue Zhang
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
414–426
URL:
https://aclanthology.org/2022.emnlp-main.27
DOI:
10.18653/v1/2022.emnlp-main.27
Cite (ACL):
Matthew Finlayson, Kyle Richardson, Ashish Sabharwal, and Peter Clark. 2022. What Makes Instruction Learning Hard? An Investigation and a New Challenge in a Synthetic Environment. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 414–426, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics.
Cite (Informal):
What Makes Instruction Learning Hard? An Investigation and a New Challenge in a Synthetic Environment (Finlayson et al., EMNLP 2022)
PDF:
https://preview.aclanthology.org/naacl24-info/2022.emnlp-main.27.pdf