Shaozu Yuan
2022
Few-Shot Table Understanding: A Benchmark Dataset and Pre-Training Baseline
Ruixue Liu | Shaozu Yuan | Aijun Dai | Lei Shen | Tiangang Zhu | Meng Chen | Xiaodong He
Proceedings of the 29th International Conference on Computational Linguistics
Few-shot table understanding is a critical and challenging problem in real-world scenarios, as annotating large numbers of tables is usually costly. Pre-trained language models (PLMs), which have recently flourished on tabular data, have demonstrated their effectiveness for table understanding tasks. However, few-shot table understanding remains rarely explored due to the lack of a public table pre-training corpus and well-defined downstream benchmark tasks, especially in Chinese. In this paper, we establish a benchmark dataset, FewTUD, which consists of 5 different tasks with human annotations, to systematically explore few-shot table understanding in depth. Since large collections of public Chinese tables are unavailable, we also collect a large-scale, multi-domain tabular corpus to facilitate future Chinese table pre-training; it includes one million tables and related natural language text with auxiliary supervised interaction signals. Finally, we present FewTPT, a novel table PLM with rich interactions over tabular data, and evaluate its performance comprehensively on the benchmark. Our dataset and model will be released to the public soon.
2020
The JDDC Corpus: A Large-Scale Multi-Turn Chinese Dialogue Dataset for E-commerce Customer Service
Meng Chen | Ruixue Liu | Lei Shen | Shaozu Yuan | Jingyan Zhou | Youzheng Wu | Xiaodong He | Bowen Zhou
Proceedings of the Twelfth Language Resources and Evaluation Conference
Human conversations are complicated, and building a human-like dialogue agent is an extremely challenging task. With the rapid development of deep learning techniques, data-driven models, which require huge amounts of real conversation data, have become increasingly prevalent. In this paper, we construct JDDC, a large-scale real-scenario Chinese E-commerce conversation corpus with more than 1 million multi-turn dialogues, 20 million utterances, and 150 million words. The dataset reflects several characteristics of human-human conversations, e.g., goal-drivenness and long-term dependency within the context. It also covers various dialogue types, including task-oriented, chitchat, and question-answering. Extra intent information and three well-annotated challenge sets are also provided. We then evaluate several retrieval-based and generative models to provide baseline benchmark performance on the JDDC corpus. We hope JDDC can serve as an effective testbed and benefit the development of fundamental research in dialogue tasks.
Co-authors
- Meng Chen 2
- Ruixue Liu 2
- Lei Shen 2
- Xiaodong He 2
- Jingyan Zhou 1