Cong Wang

2025

Review-Instruct: A Review-Driven Multi-Turn Conversations Generation Method for Large Language Models
Jiangxu Wu | Cong Wang | TianHuang Su | Jun Yang | Haozhi Lin | Chao Zhang | Ming Peng | Kai Shi | SongPan Yang | BinQiang Pan | ZiXian Li
Findings of the Association for Computational Linguistics: ACL 2025

The effectiveness of large language models (LLMs) in conversational AI is hindered by their reliance on single-turn supervised fine-tuning (SFT) data, which limits contextual coherence in multi-turn dialogues. Existing methods for generating multi-turn dialogue data struggle to ensure both diversity and quality in instructions. To address this, we propose Review-Instruct, a novel framework that synthesizes multi-turn conversations through an iterative “Ask-Respond-Review” process involving three agent roles: a Candidate, multiple Reviewers, and a Chairman. The framework iteratively refines instructions by incorporating Reviewer feedback, enhancing dialogue diversity and difficulty. We construct a multi-turn dataset using the Alpaca dataset and fine-tune the LLaMA2-13B model. Evaluations on MT-Bench, MMLU-Pro, and Auto-Arena demonstrate significant improvements, achieving absolute gains of 2.9% on MMLU-Pro and 2% on MT-Bench compared to prior state-of-the-art models based on LLaMA2-13B. Ablation studies confirm the critical role of the Review stage and the use of multiple Reviewers in boosting instruction diversity and difficulty. Our work highlights the potential of review-driven, multi-agent frameworks for generating high-quality conversational data at scale.
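The iterative "Ask-Respond-Review" loop described above can be sketched as follows. This is a minimal illustration, assuming stub functions in place of actual LLM calls; the names `candidate_answer`, `reviewer_feedback`, and `chairman_select` are hypothetical and not from the paper.

```python
import random

# Hypothetical stand-ins for the three agent roles; a real system would
# back each with an LLM prompt rather than string templates.
def candidate_answer(instruction):
    # Candidate: responds to the current instruction.
    return f"answer to: {instruction}"

def reviewer_feedback(reviewer_id, instruction, answer):
    # Reviewer: proposes a harder or more diverse follow-up instruction.
    return f"[reviewer {reviewer_id}] refine '{instruction}' with more detail"

def chairman_select(feedbacks):
    # Chairman: consolidates reviewer feedback into the next instruction.
    return random.choice(feedbacks)

def review_instruct(seed_instruction, num_reviewers=3, num_turns=3):
    """Iterative Ask-Respond-Review loop producing a multi-turn conversation."""
    conversation = []
    instruction = seed_instruction
    for _ in range(num_turns):
        answer = candidate_answer(instruction)            # Respond
        conversation.append((instruction, answer))
        feedbacks = [reviewer_feedback(i, instruction, answer)
                     for i in range(num_reviewers)]       # Review
        instruction = chairman_select(feedbacks)          # next Ask
    return conversation

dialogue = review_instruct("Explain photosynthesis")
```

Each loop iteration appends one (instruction, response) turn, so the seed instruction grows into a multi-turn dialogue whose later instructions have been refined by reviewer feedback.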

2023

Adaptive Gating in Mixture-of-Experts based Language Models
Jiamin Li | Qiang Su | Yitao Yang | Yimin Jiang | Cong Wang | Hong Xu
Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing

Large language models have demonstrated exceptional language understanding capabilities in many NLP tasks. Sparsely activated mixture-of-experts (MoE) has emerged as a promising solution for scaling models while maintaining a constant number of computational operations. Existing MoE models adopt a fixed gating network where each token is computed by the same number of experts. This contradicts our intuition that the tokens in each sequence vary in linguistic complexity and, consequently, require different computational costs. Prior research has said little about the trade-off between per-token computation and model performance. This paper introduces adaptive gating in MoE, a flexible training strategy that allows tokens to be processed by a variable number of experts based on the expert probability distribution. Adaptive gating preserves sparsity while improving training efficiency. We further draw upon curriculum learning to better align the order of training samples and maximize the training time savings. Extensive experiments on diverse NLP tasks show that adaptive gating reduces training time by up to 22.5% while maintaining inference quality. Moreover, we conduct a comprehensive analysis of the gating decisions and present our insights on which tokens are inherently difficult to process, depending on the specific language task.
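One way to realize a variable number of experts per token is to route a token to a single expert when the gate is confident and to two experts otherwise. The following is a minimal sketch of that idea, assuming an illustrative confidence `threshold` of 0.75 that is not a value from the paper.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def adaptive_gate(logits, threshold=0.75):
    """Route each token to 1 expert if the gate is confident, else 2.

    Returns a list of (expert_indices, gate_weights) per token.
    """
    probs = softmax(logits)                      # shape: (tokens, experts)
    routes = []
    for p in probs:
        top2 = np.argsort(p)[::-1][:2]           # two highest-probability experts
        if p[top2[0]] >= threshold:              # confident: single expert
            routes.append((top2[:1], np.array([1.0])))
        else:                                    # ambiguous: two experts
            w = p[top2] / p[top2].sum()          # renormalize gate weights
            routes.append((top2, w))
    return routes

# Two tokens over four experts: the first has a dominant expert, the second does not.
logits = np.array([[4.0, 0.0, 0.0, 0.0],
                   [1.0, 0.9, 0.0, 0.0]])
routes = adaptive_gate(logits)  # first token -> 1 expert, second -> 2 experts
```

Because "easy" tokens take the single-expert path, the average number of expert evaluations per token drops below the fixed top-2 baseline while sparsity is preserved.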

2020

Social Emergency Event Judgement based on BiLSTM-CRF (基于BiLSTM-CRF的社会突发事件研判方法)
Huijun Hu (胡慧君) | Cong Wang (王聪) | Jianhua Dai (代建华) | Maofu Liu (刘茂福)
Proceedings of the 19th Chinese National Conference on Computational Linguistics

The classification and severity assessment of social emergency events is a crucial link in emergency response. However, most existing studies identify judgment evidence manually or with rule-based methods; given the structural complexity of social emergency events and the flexibility of their linguistic descriptions, such approaches are severely limited for evidence identification. Drawing on the idea of event extraction, this paper treats event types and judgment evidence as event elements and identifies them at a fine-grained level with a BiLSTM-CRF model, then combines the two: the classification result serves as input to severity assessment, which identifies the judgment evidence. Finally, the identified evidence is combined with an attention mechanism for severity assessment, so that accurate identification of judgment evidence improves the accuracy of the assessment. Experiments show that, compared with manual or rule-based identification of judgment evidence, the proposed method is more robust and achieves good performance on social emergency event assessment. Keywords: event classification; judgment-evidence identification; severity assessment; BiLSTM-CRF
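The fine-grained recognition step in a BiLSTM-CRF tagger boils down to Viterbi decoding over per-token emission scores (produced by the BiLSTM) and tag-transition scores (the CRF layer). The sketch below shows that decoding step on toy scores; the BIO tag set `O`/`B-EVID`/`I-EVID` and all numeric values are illustrative, not from the paper.

```python
import numpy as np

def viterbi(emissions, transitions):
    """Most likely tag sequence given emission scores (tokens x tags),
    as a BiLSTM would produce, and tag-transition scores (tags x tags),
    the CRF layer's parameters."""
    T, K = emissions.shape
    score = emissions[0].copy()              # best score ending in each tag
    back = np.zeros((T, K), dtype=int)       # backpointers for path recovery
    for t in range(1, T):
        total = score[:, None] + transitions + emissions[t][None, :]
        back[t] = total.argmax(axis=0)
        score = total.max(axis=0)
    path = [int(score.argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t][path[-1]]))
    return path[::-1]

TAGS = ["O", "B-EVID", "I-EVID"]  # BIO scheme for judgment-evidence spans
# Toy emission scores for a 4-token sentence (rows: tokens, cols: tags).
emissions = np.array([[2.0, 0.1, 0.0],
                      [0.1, 2.0, 0.0],
                      [0.0, 0.5, 2.0],
                      [2.0, 0.1, 0.0]])
# Transition scores penalize I-EVID directly after O (an invalid BIO move).
transitions = np.array([[0.5, 0.0, -5.0],
                        [0.0, 0.0,  1.0],
                        [0.0, 0.0,  0.5]])
tags = [TAGS[i] for i in viterbi(emissions, transitions)]
# -> ["O", "B-EVID", "I-EVID", "O"]: one contiguous evidence span
```

The CRF's transition matrix is what lets the model reject locally plausible but globally invalid tag sequences (such as `I-EVID` with no preceding `B-EVID`), which is why it is layered on top of the BiLSTM's per-token scores.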