Advancing Language Models through Instruction Tuning: Recent Progress and Challenges

Zhihan Zhang, Renze Lou, Fangkai Jiao, Wenpeng Yin, Meng Jiang


Abstract
The ability to follow instructions is a key capability of AI systems. In NLP, instruction tuning – the process of training language models to follow natural language instructions – has therefore become a fundamental component of the model development pipeline. This tutorial addresses three critical questions in the field: (1) What are the current focal points of instruction tuning research? (2) What are the best practices for training an instruction-following model? (3) What new challenges have emerged? To answer these questions, the tutorial presents a systematic overview of recent advances in instruction tuning. It covers the successive stages of model training: supervised fine-tuning, preference optimization, and reinforcement learning. It introduces scalable strategies for building high-quality instruction data, explores approaches for training autonomous AI agents that handle complex real-world tasks, and discusses common criteria for evaluating instruction-following models. The audience will gain a comprehensive understanding of cutting-edge trends in instruction tuning and insights into promising directions for future research.
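
For readers unfamiliar with the first training stage named above, the following is a minimal sketch of supervised fine-tuning on a single instruction–response pair, written in plain PyTorch with Hugging Face transformers. The model name "gpt2", the prompt template, and the example pair are illustrative placeholders, not details drawn from the tutorial itself; real pipelines batch over large instruction datasets.

# Minimal SFT sketch: train the model to produce the reference response
# given an instruction, computing loss only on the response tokens.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # placeholder; any causal LM works here
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.train()

# One (instruction, response) pair; real datasets contain many thousands.
instruction = "Summarize: The cat sat on the mat."
response = "A cat sat on a mat."

prompt_ids = tokenizer(f"Instruction: {instruction}\nResponse: ",
                       return_tensors="pt").input_ids
response_ids = tokenizer(response + tokenizer.eos_token,
                         return_tensors="pt").input_ids

input_ids = torch.cat([prompt_ids, response_ids], dim=1)
labels = input_ids.clone()
labels[:, : prompt_ids.shape[1]] = -100  # mask prompt tokens from the loss

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)
loss = model(input_ids=input_ids, labels=labels).loss  # cross-entropy on response
loss.backward()
optimizer.step()

Preference optimization and reinforcement learning then build on such an SFT checkpoint, optimizing the model against human or learned preference signals rather than reference responses alone.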
Anthology ID: 2025.emnlp-tutorials.2
Volume: Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing: Tutorial Abstracts
Month: November
Year: 2025
Address: Suzhou, China
Editors: Valentina Pyatkin, Andreas Vlachos
Venue: EMNLP
Publisher: Association for Computational Linguistics
Pages: 4–6
URL: https://preview.aclanthology.org/ingest-emnlp/2025.emnlp-tutorials.2/
Cite (ACL): Zhihan Zhang, Renze Lou, Fangkai Jiao, Wenpeng Yin, and Meng Jiang. 2025. Advancing Language Models through Instruction Tuning: Recent Progress and Challenges. In Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing: Tutorial Abstracts, pages 4–6, Suzhou, China. Association for Computational Linguistics.
Cite (Informal): Advancing Language Models through Instruction Tuning: Recent Progress and Challenges (Zhang et al., EMNLP 2025)
PDF: https://preview.aclanthology.org/ingest-emnlp/2025.emnlp-tutorials.2.pdf