Ke Sun


2024

Adaption-of-Thought: Learning Question Difficulty Improves Large Language Models for Reasoning
Mayi Xu | Yongqi Li | Ke Sun | Tieyun Qian
Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing

Large language models (LLMs) have shown excellent capability for solving reasoning problems. Existing approaches do not differentiate question difficulty when designing prompting methods. Clearly, a simple method cannot elicit sufficient knowledge from LLMs to answer a hard question, while a sophisticated one forces the LLM to generate redundant or even inaccurate intermediate steps for a simple question. Consequently, the performance of existing methods fluctuates across questions. In this work, we propose Adaption-of-Thought (AdoT), an adaptive method that improves LLMs on reasoning problems: it first measures question difficulty and then tailors demonstration set construction and a difficulty-adapted retrieval strategy for adaptive demonstration construction. Experimental results on three reasoning tasks prove the superiority of our proposed method, showing absolute improvements of up to 5.5% on arithmetic reasoning, 7.4% on symbolic reasoning, and 2.3% on commonsense reasoning. Our code and implementation details are available at: https://github.com/NLPGM/AdoT
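As a rough illustration of the difficulty-adapted idea, here is a minimal Python sketch that scores a question with a toy difficulty proxy and retrieves demonstrations of matching difficulty. The Demo class, the numeric-token proxy, and retrieve_demos are illustrative assumptions, not AdoT's actual implementation.

```python
from dataclasses import dataclass

@dataclass
class Demo:
    question: str
    rationale: str   # chain-of-thought demonstration
    difficulty: int  # e.g., number of reasoning steps

def estimate_difficulty(question: str) -> int:
    # Toy proxy: count numeric tokens; a real difficulty measure would be richer.
    return sum(tok.strip("?.,").isdigit() for tok in question.split())

def retrieve_demos(question: str, pool: list[Demo], k: int = 2) -> list[Demo]:
    # Difficulty-adapted retrieval: prefer demonstrations whose difficulty
    # is closest to the estimated difficulty of the incoming question.
    target = estimate_difficulty(question)
    return sorted(pool, key=lambda d: abs(d.difficulty - target))[:k]

pool = [
    Demo("What is 2 + 3?", "2 + 3 = 5. The answer is 5.", 2),
    Demo("Tom buys 3 bags of 4 apples and eats 2. How many remain?",
         "3 * 4 = 12. 12 - 2 = 10. The answer is 10.", 3),
]
print([d.question for d in retrieve_demos("What is 7 + 8?", pool, k=1)])
```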

2023

Gloss-Free End-to-End Sign Language Translation
Kezhou Lin | Xiaohan Wang | Linchao Zhu | Ke Sun | Bang Zhang | Yi Yang
Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)

In this paper, we tackle the problem of sign language translation (SLT) without gloss annotations. Although intermediate representations such as glosses have proven effective, gloss annotations are hard to acquire, especially in large quantities. This limits the domain coverage of translation datasets and thus handicaps real-world applications. To mitigate this problem, we design the Gloss-Free End-to-end sign language translation framework (GloFE). Our method improves SLT performance in the gloss-free setting by exploiting the shared underlying semantics of signs and the corresponding spoken translation. Common concepts are extracted from the text and used as a weak form of intermediate representation. The global embeddings of these concepts serve as queries for cross-attention to locate the corresponding information within the learned visual features. In a contrastive manner, we increase the similarity of query results between samples that contain such concepts and decrease it for those that do not. We obtain state-of-the-art results on large-scale datasets, including OpenASL and How2Sign.
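A minimal sketch of the concept-queried cross-attention described above, assuming illustrative tensor shapes and a simplified pairwise contrastive term; none of these names or dimensions come from GloFE's released code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Global concept embeddings act as queries over learned visual features.
attn = nn.MultiheadAttention(embed_dim=256, num_heads=4, batch_first=True)
visual_feats = torch.randn(2, 64, 256)  # (batch, video frames, dim)
concept_emb = torch.randn(2, 8, 256)    # (batch, extracted concepts, dim)

# Cross-attention: each concept query gathers matching visual information.
query_out, _ = attn(concept_emb, visual_feats, visual_feats)

# Simplified contrastive step: pull query results of samples sharing a
# concept together; a full objective would also push apart negatives.
anchor, other = query_out[0].mean(0), query_out[1].mean(0)
similarity = F.cosine_similarity(anchor, other, dim=0)
loss = 1.0 - similarity  # minimized when the positive pair is aligned
```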

Dynamic Open-book Prompt for Conversational Recommender System
Xuan Ma | Tieyun Qian | Ke Sun
Findings of the Association for Computational Linguistics: EMNLP 2023

Conversational Recommender Systems (CRS) aim to deliver personalized recommendations through interactive dialogues. Recent advances in prompt learning have shed light on this task. However, the performance of existing methods is constrained by the limited context within ongoing conversations. Moreover, these methods use training samples only for prompt parameter training; the constructed prompt cannot refer back to the training data during inference, which exacerbates the problem of limited context. To solve this problem, we propose a novel Dynamic Open-book Prompt approach, where the open book stores users’ experiences in historical data, and we dynamically construct the prompt to memorize the user’s current utterance and selectively retrieve relevant contexts from the open book. Specifically, we first build an item-recommendation graph from the open book and apply graph convolution on it to form a base prompt that contains information beyond the finite dialogue. Then, we enhance the representation learning process of the prompt by tailoring similar contexts in the graph into the prompt to meet the user’s current need. This ensures the prompt provides targeted suggestions that are both informed and contextually relevant. Extensive experimental results on the ReDial dataset demonstrate the significant improvements achieved by our proposed model over state-of-the-art methods. Our code and data are available at https://github.com/NLPWM-WHU/DOP.
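The toy sketch below illustrates the two steps the abstract describes, graph convolution over an item-recommendation graph to form a base prompt and similarity-based retrieval of contexts; the graph, embeddings, and variable names are assumptions for illustration, not the DOP implementation.

```python
import torch

# Tiny item-recommendation graph (adjacency with self-loops).
num_items, dim = 4, 8
adj = torch.eye(num_items)
adj[0, 1] = adj[1, 0] = 1.0  # items 0 and 1 co-occur in historical dialogues

item_emb = torch.randn(num_items, dim)

# One step of mean-normalized graph convolution: each item aggregates its
# neighbors, yielding a "base prompt" informed by the open book.
deg = adj.sum(dim=1, keepdim=True)
base_prompt = (adj / deg) @ item_emb  # (num_items, dim)

# Tailor the prompt: retrieve the graph contexts most similar to the
# user's current utterance embedding and keep the top matches.
utterance = torch.randn(dim)
scores = base_prompt @ utterance
top_contexts = base_prompt[scores.topk(2).indices]
```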

2020

DuSQL: A Large-Scale and Pragmatic Chinese Text-to-SQL Dataset
Lijie Wang | Ao Zhang | Kun Wu | Ke Sun | Zhenghua Li | Hua Wu | Min Zhang | Haifeng Wang
Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)

Due to the lack of labeled data, previous research on text-to-SQL parsing has mainly focused on English; representative English datasets include ATIS, WikiSQL, and Spider. This paper presents DuSQL, a large-scale and pragmatic Chinese dataset for the cross-domain text-to-SQL task, containing 200 databases, 813 tables, and 23,797 question/SQL pairs. Our new dataset has three major characteristics. First, by manually analyzing questions from several representative applications, we try to figure out the true distribution of SQL queries in real-life needs. Second, motivated by this analysis of SQL query distributions, DuSQL contains a considerable proportion of SQL queries involving row or column calculations. Finally, we adopt an effective data construction framework based on human-computer collaboration, whose basic idea is to automatically generate SQL queries from the SQL grammar, constrained by the given database. This paper describes the construction process and data statistics of DuSQL in detail. Moreover, we present and compare the performance of several open-source text-to-SQL parsers with minor modifications to accommodate Chinese, including a simple yet effective extension to IRNet for handling calculation SQL queries.
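To make the grammar-based construction idea concrete, here is a toy sketch that expands a single made-up production rule constrained by a given database schema; the schema and rule are illustrative assumptions, far simpler than DuSQL's actual grammar.

```python
import random

# The given database schema constrains what the grammar may generate.
db = {"city": ["name", "population", "area"]}

def generate_sql() -> str:
    # Production rule: SELECT col FROM table WHERE col OP value
    table = random.choice(list(db))
    select_col, where_col = random.sample(db[table], 2)
    op = random.choice([">", "<", "="])
    return (f"SELECT {select_col} FROM {table} "
            f"WHERE {where_col} {op} {random.randint(1, 100)}")

print(generate_sql())  # e.g., SELECT name FROM city WHERE population > 42
```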

2013

A Hierarchical Semantics-Aware Distributional Similarity Scheme
Shuqi Sun | Ke Sun | Shiqi Zhao | Haifeng Wang | Muyun Yang | Sheng Li
Proceedings of the Sixth International Joint Conference on Natural Language Processing

2008

A Study of Chinese Lexical Analysis Based on Discriminative Models
Guang-Lu Sun | Cheng-Jie Sun | Ke Sun | Xiao-Long Wang
Proceedings of the Sixth SIGHAN Workshop on Chinese Language Processing