Qianhao Yuan
2025
ConsistentChat: Building Skeleton-Guided Consistent Multi-Turn Dialogues for Large Language Models from Scratch
Jiawei Chen | Xinyan Guan | Qianhao Yuan | Mo Guozhao | Weixiang Zhou | Yaojie Lu | Hongyu Lin | Ben He | Le Sun | Xianpei Han
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing
Current instruction data synthesis methods primarily focus on single-turn instructions and often neglect cross-turn coherence, resulting in context drift and reduced task completion rates in extended conversations. To address this limitation, we propose Skeleton-Guided Multi-Turn Dialogue Generation, a framework that constrains multi-turn instruction synthesis by explicitly modeling human conversational intent. It operates in two stages: (1) Intent Modeling, which captures the global structure of human dialogues by assigning each conversation to one of nine well-defined intent trajectories, ensuring a coherent and goal-oriented information flow; and (2) Skeleton Generation, which constructs a structurally grounded sequence of user queries aligned with the modeled intent, thereby serving as a scaffold that constrains and guides the downstream instruction synthesis process. Based on this process, we construct ConsistentChat, a multi-turn instruction dataset with approximately 15,000 multi-turn conversations and 224,392 utterances. Experiments on the Light, Topdial, and MT-Eval benchmarks show that models fine-tuned on ConsistentChat achieve a 20–30% improvement in chat consistency and up to a 15% increase in task success rate, significantly outperforming models trained on existing single-turn and multi-turn instruction datasets.
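A minimal sketch of the two-stage idea described in the abstract, assuming a generic llm(prompt) -> str callable stands in for whatever generator model is used; the trajectory labels, prompts, and function names below are illustrative assumptions, not the paper's released pipeline.

```python
from typing import Callable, List, Tuple

# Illustrative intent trajectories; the paper defines nine, these two are placeholders.
INTENT_TRAJECTORIES = [
    "information-seeking: broad question -> follow-up details -> summary",
    "problem-solving: describe issue -> attempt fix -> refine solution",
]

def build_skeleton(llm: Callable[[str], str], topic: str,
                   trajectory: str, n_turns: int = 5) -> List[str]:
    """Stage 2 (Skeleton Generation): draft all user queries up front so that
    every turn is constrained by the chosen intent trajectory."""
    prompt = (
        f"Topic: {topic}\nIntent trajectory: {trajectory}\n"
        f"Write {n_turns} user questions, one per line, following this trajectory."
    )
    return [q.strip() for q in llm(prompt).splitlines() if q.strip()]

def synthesize_dialogue(llm: Callable[[str], str], topic: str,
                        trajectory: str, n_turns: int = 5) -> List[Tuple[str, str]]:
    """Fill in assistant responses turn by turn, conditioned on the fixed skeleton,
    so the synthesized conversation keeps a coherent, goal-oriented information flow."""
    skeleton = build_skeleton(llm, topic, trajectory, n_turns)
    history: List[Tuple[str, str]] = []
    for user_query in skeleton:
        context = "\n".join(f"{role}: {text}" for role, text in history)
        answer = llm(f"{context}\nuser: {user_query}\nassistant:")
        history += [("user", user_query), ("assistant", answer)]
    return history
```

Because the skeleton is fixed before any response is generated, later turns cannot drift away from the intent chosen in Stage 1; the response model only fills in answers along a pre-committed query sequence.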
ShortGPT: Layers in Large Language Models are More Redundant Than You Expect
Xin Men | Mingyu Xu | Qingyu Zhang | Qianhao Yuan | Bingning Wang | Hongyu Lin | Yaojie Lu | Xianpei Han | Weipeng Chen
Findings of the Association for Computational Linguistics: ACL 2025
As Large Language Models (LLMs) continue to advance, their computational overhead has increased significantly. In this study, we identify notable redundancy across the layers of LLMs, where some layers contribute minimally to the overall network functionality. To quantify this, we introduce a metric called Block Influence (BI), which measures the importance of each layer based on the similarity between its input and output. Based on this observation of layer redundancy, we propose two straightforward pruning methods: ShortGPT for multiple-choice tasks and ShortGPT-gen for generative tasks, both of which remove the layers with the lowest BI scores. Our methods outperform previous pruning approaches. The ability to achieve better results through simple layer pruning, as opposed to more complex pruning techniques, suggests a high degree of redundancy across layers. We hope this work will contribute to future research on improving LLM efficiency.
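A minimal sketch of how a BI-style score can be computed, assuming per-layer hidden states are available (for example, collected with output_hidden_states=True in a Hugging Face Transformers forward pass); the function names and the ranking helper below are illustrative assumptions, not the paper's released implementation.

```python
import torch
import torch.nn.functional as F

def block_influence(hidden_in: torch.Tensor, hidden_out: torch.Tensor) -> float:
    """BI for one layer: 1 minus the mean cosine similarity between the layer's
    input and output hidden states (shape: batch x seq_len x hidden_dim).
    A low BI means the layer barely changes its input, i.e. it is more redundant."""
    cos = F.cosine_similarity(hidden_in, hidden_out, dim=-1)  # (batch, seq_len)
    return (1.0 - cos).mean().item()

def rank_layers_by_bi(hidden_states: list) -> list:
    """hidden_states[i] is the input to layer i (so hidden_states[i + 1] is its output);
    returns layer indices sorted from least to most influential."""
    scores = [block_influence(hidden_states[i], hidden_states[i + 1])
              for i in range(len(hidden_states) - 1)]
    return sorted(range(len(scores)), key=lambda i: scores[i])
```

Layers at the front of this ranking are the natural candidates for removal; in practice the scores would be averaged over a calibration set rather than a single batch.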