Abstract
Instruction augmentation is a crucial step for unleashing the full potential of large language models (LLMs) in downstream tasks. Existing Self-Instruct methods primarily simulate new instructions from a few initial instructions via in-context learning. However, our study identifies a critical flaw in this approach: even with GPT-4o, it cannot generate complex instructions of length ≥ 100, which are necessary in complex tasks such as code completion. To address this issue, our key insight is that fine-tuning open-source LLMs with only ten examples can produce complex instructions that maintain distributional consistency for complex reasoning tasks. We introduce Ada-Instruct, an adaptive instruction generator developed through fine-tuning. We empirically validated Ada-Instruct’s efficacy across different applications. The results highlight Ada-Instruct’s capacity to generate long, intricate, and distributionally consistent instructions.
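For readers who want a concrete picture of the recipe the abstract describes, the sketch below illustrates the core idea: fine-tune an open-source causal LM on a small set of seed instructions from the target task, then sample new instructions from the adapted model. This is a minimal illustration assuming a HuggingFace-style setup, not the authors’ released implementation; the model name, seed data, and hyperparameters are placeholders.

```python
# Minimal sketch of the Ada-Instruct idea (illustrative, not the paper's code):
# fine-tune an open-source causal LM on a handful of seed instructions so it
# learns to generate new, distributionally consistent instructions.
import torch
from torch.optim import AdamW
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "meta-llama/Llama-2-7b-hf"  # placeholder open-source LLM
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token

# The paper uses ~10 seed instructions; two shown here for brevity.
seed_instructions = [
    "Write a function that merges overlapping intervals in a list ...",
    "Implement an LRU cache supporting get and put in O(1) time ...",
]

optimizer = AdamW(model.parameters(), lr=1e-5)
model.train()
for epoch in range(3):  # a few passes over the tiny seed set
    for text in seed_instructions:
        batch = tokenizer(text + tokenizer.eos_token, return_tensors="pt")
        # Standard causal-LM objective: labels are the input ids themselves.
        loss = model(**batch, labels=batch["input_ids"]).loss
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()

# Sample a new instruction from the adapted generator.
model.eval()
start = torch.tensor([[tokenizer.bos_token_id]])
out = model.generate(input_ids=start, max_new_tokens=256,
                     do_sample=True, top_p=0.95)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```

In practice the sampled instructions would then be answered (e.g., by a stronger LLM) and filtered before being used as training data for the downstream task.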
- Anthology ID: 2024.findings-emnlp.409
- Volume: Findings of the Association for Computational Linguistics: EMNLP 2024
- Month: November
- Year: 2024
- Address: Miami, Florida, USA
- Editors: Yaser Al-Onaizan, Mohit Bansal, Yun-Nung Chen
- Venue: Findings
- Publisher: Association for Computational Linguistics
- Pages: 6967–6984
- URL: https://preview.aclanthology.org/add_missing_videos/2024.findings-emnlp.409/
- DOI: 10.18653/v1/2024.findings-emnlp.409
- Cite (ACL): Wanyun Cui and Qianle Wang. 2024. Ada-Instruct: Adapting Instruction Generators for Complex Reasoning. In Findings of the Association for Computational Linguistics: EMNLP 2024, pages 6967–6984, Miami, Florida, USA. Association for Computational Linguistics.
- Cite (Informal): Ada-Instruct: Adapting Instruction Generators for Complex Reasoning (Cui & Wang, Findings 2024)
- PDF: https://preview.aclanthology.org/add_missing_videos/2024.findings-emnlp.409.pdf