Weichang Liu
2025
TwT: Thinking without Tokens by Habitual Reasoning Distillation with Multi-Teachers’ Guidance
Jingxian Xu | Mengyu Zhou | Weichang Liu | Hanbing Liu | Shi Han | Dongmei Zhang
Findings of the Association for Computational Linguistics: EMNLP 2025
Large Language Models (LLMs) have made significant strides in problem-solving by incorporating reasoning processes. However, this enhanced reasoning capability increases the number of output tokens during inference, leading to higher computational costs. To address this challenge, we propose TwT (Thinking without Tokens), a method that reduces inference-time costs through habitual reasoning distillation with multi-teachers’ guidance while maintaining high performance. Our approach introduces a Habitual Reasoning Distillation method, which internalizes explicit reasoning into the model’s habitual behavior through a Teacher-Guided compression strategy inspired by human cognition. Additionally, we propose Dual-Criteria Rejection Sampling (DCRS), a technique that uses multiple teacher models to generate a high-quality and diverse distillation dataset, making our method suitable for unsupervised scenarios. Experimental results demonstrate that TwT effectively reduces inference costs while preserving superior performance, achieving up to a 13.6% improvement in accuracy with fewer output tokens than other distillation methods, and thus offers a highly practical solution for efficient LLM deployment.
2023
Addressing NER Annotation Noises with Uncertainty-Guided Tree-Structured CRFs
Jian Liu | Weichang Liu | Yufeng Chen | Jinan Xu | Zhe Zhao
Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing
Real-world named entity recognition (NER) datasets are notorious for their noisy nature, owing to annotation errors, inconsistencies, and subjective interpretations. Such noise presents a substantial challenge for traditional supervised learning methods. In this paper, we present a new and unified approach to tackling annotation noise for NER. Our method treats NER as a constituency tree parsing problem, utilizing tree-structured Conditional Random Fields (CRFs) with uncertainty evaluation for integration. Through extensive experiments conducted on four real-world datasets, we demonstrate the effectiveness of our model in addressing both partial and incorrect annotation errors. Remarkably, our model exhibits superb performance even in extreme scenarios with 90% annotation noise.