Xinting Huang


2024

DoG-Instruct: Towards Premium Instruction-Tuning Data via Text-Grounded Instruction Wrapping
Yongrui Chen | Haiyun Jiang | Xinting Huang | Shuming Shi | Guilin Qi
Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)

The improvement of LLMs’ instruction-following capabilities relies heavily on the availability of high-quality instruction-response pairs. Unfortunately, current methods for collecting such pairs suffer from either unaffordable labor costs or severe hallucinations in LLM self-generation. To tackle these challenges, this paper proposes a scalable solution. It involves training LLMs to generate instruction-response pairs based on human-written documents, rather than relying solely on self-generation without context. Our proposed method not only exploits the advantages of human-written documents in reducing hallucinations but also utilizes an LLM to wrap the expression of documents, which enables us to bridge the gap between various document styles and the standard AI response. Experiments demonstrate that our method outperforms existing typical methods on multiple benchmarks. In particular, compared to the best-performing baseline, the LLM trained using our generated dataset exhibits a 10% relative improvement in performance on AlpacaEval, despite utilizing only 1/5 of its training data. Furthermore, a comprehensive manual evaluation validates the quality of the data we generated.
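
As a rough illustration of the text-grounded wrapping idea (not the paper's implementation), the sketch below prompts an LLM to rewrite a human-written document into a grounded instruction-response pair; the `generate` callable and the prompt wording are assumptions made for this example.

```python
# Illustrative sketch only: wrap a human-written document into an
# instruction-response pair, in the spirit of text-grounded wrapping.
# `generate` is a hypothetical callable around any instruction-tuned LLM.
import json

WRAP_PROMPT = """You are given a human-written document.
Write (1) an instruction a user might plausibly ask, and (2) a response that
answers it using only information from the document, phrased as an AI assistant.
Return JSON with keys "instruction" and "response".

Document:
{document}"""

def wrap_document(document: str, generate) -> dict:
    """Turn one document into a grounded instruction-response pair."""
    raw = generate(WRAP_PROMPT.format(document=document))
    pair = json.loads(raw)
    return {"instruction": pair["instruction"], "response": pair["response"]}
```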

2023

Long-Range Language Modeling with Selective Cache
Xinting Huang | Nora Hollenstein
Findings of the Association for Computational Linguistics: EMNLP 2023

The computational cost of transformer-based language models grows quadratically with the sequence length. In this paper, we introduce the selective cache, which stores selected key-value pairs from the previous context. By selecting important key-value pairs, the model makes better use of the cache, so that a longer context history can be stored within a limited cache size. We design three kinds of selection methods. The first is based on human language processing: key-value pairs are selected if they correspond to tokens that are fixated longer, as recorded in eye-tracking-while-reading experiments. We also incorporate the cognitively inspired selection process into the language model as a trainable process, resulting in two additional methods with improved performance. The selection task is converted into a pruning task so that it can be trained with differentiable masks. We demonstrate that the proposed selective cache improves language modeling performance across different datasets. With the same number of stored key-value pairs (cache size), our selective cache outperforms the XL cache and compressive cache by considerable margins.
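
To make the selection step concrete, here is a minimal sketch (not the paper's code) that keeps only the top-scoring key-value pairs within a fixed cache budget; the importance scores stand in for either the fixation-based or the learned selection signal.

```python
# Minimal sketch of a selective key-value cache (illustrative, not the
# paper's implementation): keep only the highest-scoring pairs within a
# fixed cache budget, preserving their original order.
import torch

def select_cache(keys, values, scores, cache_size):
    """keys/values: [n, d] tensors; scores: [n] importance per token
    (a stand-in for fixation duration or a learned mask); returns at most
    cache_size key-value pairs."""
    if keys.size(0) <= cache_size:
        return keys, values
    top = torch.topk(scores, cache_size).indices.sort().values  # keep temporal order
    return keys[top], values[top]

# Toy usage: a budget of 8 pairs out of 12 candidates.
keys, values = torch.randn(12, 16), torch.randn(12, 16)
scores = torch.rand(12)  # e.g. predicted fixation time per token
keys, values = select_cache(keys, values, scores, cache_size=8)
print(keys.shape)  # torch.Size([8, 16])
```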

Pre-training Multi-party Dialogue Models with Latent Discourse Inference
Yiyang Li | Xinting Huang | Wei Bi | Hai Zhao
Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)

Multi-party dialogues are more difficult for models to understand than one-to-one two-party dialogues, since they involve multiple interlocutors, resulting in interweaving reply-to relations and information flows. To step over these obstacles, an effective way is to pre-train a model that understands the discourse structure of multi-party dialogues, namely, to whom each utterance is replying. However, due to the lack of explicitly annotated discourse labels in multi-party dialogue corpora, previous works fail to scale up the pre-training process, simply leaving the unlabeled multi-party conversational data unused. To fully utilize the unlabeled data, we propose to treat the discourse structures as latent variables, then jointly infer them and pre-train the discourse-aware model by unsupervised latent variable inference methods. Experiments on multiple downstream tasks show that our pre-trained model outperforms strong baselines by large margins and achieves state-of-the-art (SOTA) results, justifying the effectiveness of our method. The official implementation of this paper is available at https://github.com/EricLee8/MPD_EMVI.
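
One way to picture the joint inference described above is as an EM-style loop that alternates between inferring reply-to links and updating the model; the sketch below is purely schematic, with hypothetical `model` methods, and is not the released MPD_EMVI implementation.

```python
# Schematic only: treat reply-to links as latent variables and alternate
# between inferring them (E-step) and updating the dialogue model (M-step).
# All `model` methods are hypothetical placeholders, not the released code.
def pretrain_with_latent_discourse(model, unlabeled_dialogues, epochs=1):
    for _ in range(epochs):
        for dialogue in unlabeled_dialogues:
            # E-step: infer which earlier utterance each utterance replies to,
            # using the current model's link scores.
            latent_links = model.infer_reply_to(dialogue)
            # M-step: treat the inferred links as (soft) labels and update the
            # model together with its usual self-supervised pre-training losses.
            loss = model.discourse_aware_loss(dialogue, latent_links)
            loss.backward()
            model.optimizer_step()
    return model
```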

Effidit: An Assistant for Improving Writing Efficiency
Shuming Shi | Enbo Zhao | Wei Bi | Deng Cai | Leyang Cui | Xinting Huang | Haiyun Jiang | Duyu Tang | Kaiqiang Song | Longyue Wang | Chenyan Huang | Guoping Huang | Yan Wang | Piji Li
Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 3: System Demonstrations)

Writing assistants are valuable tools that can help writers improve their writing skills. We introduce Effidit (Efficient and Intelligent Editing), a digital writing assistant that helps users write higher-quality text more efficiently through the use of Artificial Intelligence (AI) and Natural Language Processing (NLP) technologies. We significantly expand the capacities of a writing assistant by providing functions in three modules: text completion, hint recommendation, and writing refinement. Based on the above efforts, Effidit can efficiently assist users in creating their own text. Effidit has been deployed to several Tencent products and publicly released at https://effidit.qq.com/.

Explore-Instruct: Enhancing Domain-Specific Instruction Coverage through Active Exploration
Fanqi Wan | Xinting Huang | Tao Yang | Xiaojun Quan | Wei Bi | Shuming Shi
Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing

Instruction-tuning can be substantially improved through enhanced diversity, resulting in models capable of handling a broader spectrum of tasks. However, existing data employed for such tuning often exhibit inadequate coverage of individual domains, limiting the scope for nuanced comprehension and interactions within these areas. To address this deficiency, we propose Explore-Instruct, a novel approach to enhance the data coverage used in domain-specific instruction-tuning through active exploration via Large Language Models (LLMs). Built upon representative domain use cases, Explore-Instruct explores a multitude of variations or possibilities by implementing a search algorithm to obtain diversified and domain-focused instruction-tuning data. Our data-centric analysis validates the effectiveness of the proposed approach in improving domain-specific instruction coverage. Moreover, our model’s performance demonstrates considerable advancements over multiple baselines, including those utilizing domain-specific data enhancement. Our findings offer a promising opportunity to improve instruction coverage, especially in domain-specific contexts, thereby advancing the development of adaptable language models. Our code, model weights, and data are public at https://github.com/fanqiwan/Explore-Instruct.
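
The active exploration can be pictured as a breadth-first expansion of a domain task tree, with an LLM proposing sub-tasks and writing instructions at each node; the sketch below uses hypothetical `propose_subtasks` and `write_instructions` callables and is not the released Explore-Instruct code.

```python
# Schematic of domain exploration as breadth-first expansion of a task tree.
# `propose_subtasks` and `write_instructions` are hypothetical LLM-backed
# callables; this is not the code released at the repository above.
from collections import deque

def explore_domain(root_task, propose_subtasks, write_instructions,
                   max_depth=3, per_node=5):
    """Expand each task into sub-tasks and collect instruction-tuning
    examples at every visited node."""
    data, queue = [], deque([(root_task, 0)])
    while queue:
        task, depth = queue.popleft()
        data.extend(write_instructions(task, n=per_node))
        if depth < max_depth:
            for sub_task in propose_subtasks(task):
                queue.append((sub_task, depth + 1))
    return data
```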

2022

Robust Task-Oriented Dialogue Generation with Contrastive Pre-training and Adversarial Filtering
Shiquan Yang | Xinting Huang | Jey Han Lau | Sarah Erfani
Findings of the Association for Computational Linguistics: EMNLP 2022

Data artifacts incentivize machine learning models to learn non-transferable generalizations by taking advantage of shortcuts in the data, and there is growing evidence that data artifacts play a role in the strong results that deep learning models achieve on recent natural language processing benchmarks. In this paper, we focus on task-oriented dialogue and investigate whether popular datasets such as MultiWOZ contain such data artifacts. We found that by only keeping frequent phrases in the training examples, state-of-the-art models perform similarly compared to the variant trained with full data, suggesting they exploit these spurious correlations to solve the task. Motivated by this, we propose a contrastive learning based framework to encourage the model to ignore these cues and focus on learning generalisable patterns. We also experiment with adversarial filtering to remove easy training instances so that the model would focus on learning from the harder instances. We conduct a number of generalization experiments (e.g., cross-domain/dataset and adversarial tests) to assess the robustness of our approach and found that it works exceptionally well.
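
As an illustration of the adversarial-filtering step (a generic variant, not necessarily the paper's exact procedure), the sketch below drops training instances that a weak reference model already answers correctly, so training concentrates on the harder instances; `weak_model` and `is_correct` are assumed callables.

```python
# Generic adversarial-filtering sketch: discard training instances that a
# weak reference model already gets right, keeping the harder ones.
# `weak_model` and `is_correct` are hypothetical callables for this example.
def adversarial_filter(examples, weak_model, is_correct):
    hard_examples = []
    for ex in examples:
        prediction = weak_model(ex["context"])
        if not is_correct(prediction, ex["response"]):
            hard_examples.append(ex)  # keep only what the weak model fails on
    return hard_examples
```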

2021

Latent Reasoning for Low-Resource Question Generation
Xinting Huang | Jianzhong Qi | Yu Sun | Rui Zhang
Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021

2020

Semi-Supervised Dialogue Policy Learning via Stochastic Reward Estimation
Xinting Huang | Jianzhong Qi | Yu Sun | Rui Zhang
Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics

In task-oriented dialogue systems, dialogue policy optimization often receives feedback only upon task completion. This is insufficient for training intermediate dialogue turns, since supervision signals (or rewards) are only provided at the end of dialogues. To address this issue, reward learning has been introduced to learn from state-action pairs of an optimal policy to provide turn-by-turn rewards. This approach requires complete state-action annotations of human-to-human dialogues (i.e., expert demonstrations), which is labor-intensive. To overcome this limitation, we propose a novel reward learning approach for semi-supervised policy learning. The proposed approach learns a dynamics model as the reward function, which models dialogue progress (i.e., state-action sequences) based on expert demonstrations, either with or without annotations. The dynamics model computes rewards by predicting whether the dialogue progress is consistent with expert demonstrations. We further propose to learn action embeddings for better generalization of the reward function. The proposed approach outperforms competitive policy learning baselines on MultiWOZ, a benchmark multi-domain dataset.
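
To make the reward-estimation idea concrete, here is a minimal sketch, assuming a recurrent dynamics model that scores how consistent a partial state-action sequence is with expert behaviour; the class and its dimensions are placeholders, not the paper's architecture.

```python
# Minimal sketch (placeholder architecture): a dynamics model trained on
# expert dialogues scores how consistent a state-action sequence is with
# expert behaviour, and that score serves as a per-turn reward.
import torch
import torch.nn as nn

class DynamicsReward(nn.Module):
    def __init__(self, state_dim, action_dim, hidden_dim=128):
        super().__init__()
        self.rnn = nn.GRU(state_dim + action_dim, hidden_dim, batch_first=True)
        self.score = nn.Linear(hidden_dim, 1)

    def forward(self, states, actions):
        """states: [B, T, state_dim]; actions: [B, T, action_dim];
        returns per-turn consistency scores in (0, 1) used as rewards."""
        x = torch.cat([states, actions], dim=-1)
        h, _ = self.rnn(x)
        return torch.sigmoid(self.score(h)).squeeze(-1)  # [B, T]

# Toy usage: rewards for a batch of 2 dialogues with 5 turns each.
reward_fn = DynamicsReward(state_dim=32, action_dim=8)
rewards = reward_fn(torch.randn(2, 5, 32), torch.randn(2, 5, 8))
print(rewards.shape)  # torch.Size([2, 5])
```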

KaLM at SemEval-2020 Task 4: Knowledge-aware Language Models for Comprehension and Generation
Jiajing Wan | Xinting Huang
Proceedings of the Fourteenth Workshop on Semantic Evaluation

This paper presents our strategies in SemEval-2020 Task 4: Commonsense Validation and Explanation. We propose a novel way to search for evidence and choose different large-scale pre-trained models as backbones for the three subtasks. The results show that our evidence-searching approach improves model performance on the commonsense explanation task. Our team ranks 2nd in subtask C according to the human evaluation score.

Generalizable and Explainable Dialogue Generation via Explicit Action Learning
Xinting Huang | Jianzhong Qi | Yu Sun | Rui Zhang
Findings of the Association for Computational Linguistics: EMNLP 2020

Response generation for task-oriented dialogues implicitly optimizes two objectives at the same time: task completion and language quality. Conditioned response generation serves as an effective approach to separately and better optimize these two objectives. Such an approach relies on system action annotations, which are expensive to obtain. To alleviate the need for action annotations, latent action learning is introduced to map each utterance to a latent representation. However, this approach is prone to over-dependence on the training data, and its generalization capability is thus restricted. To address this issue, we propose to learn natural language actions that represent utterances as a span of words. This explicit action representation promotes generalization via the compositional structure of language. It also enables an explainable generation process. Our proposed unsupervised approach learns a memory component to summarize system utterances into a short span of words. To further promote a compact action representation, we propose an auxiliary task that restores state annotations as the summarized dialogue context using the memory component. Our proposed approach outperforms latent action baselines on MultiWOZ, a benchmark multi-domain dataset.
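
As a toy illustration of an explicit word-span action, the snippet below keeps the few highest-scoring tokens of a system utterance in their original order; the scores are random placeholders standing in for the learned memory component's summarization weights.

```python
# Toy illustration only: represent a system utterance as a short span of words
# by keeping the highest-scoring tokens (random scores here stand in for the
# learned memory component's summarization weights).
import torch

def utterance_to_action(tokens, token_scores, span_len=4):
    """tokens: list[str]; token_scores: [len(tokens)] tensor; returns a short
    word-span action that preserves the original word order."""
    k = min(span_len, len(tokens))
    idx = torch.topk(token_scores, k).indices.sort().values
    return " ".join(tokens[int(i)] for i in idx)

tokens = "there are three cheap italian restaurants in the centre".split()
scores = torch.rand(len(tokens))
print(utterance_to_action(tokens, scores))  # e.g. "three cheap restaurants centre"
```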