Shuang Cheng
2024
Unlocking Continual Learning Abilities in Language Models
Wenyu Du | Shuang Cheng | Tongxu Luo | Zihan Qiu | Zeyu Huang | Ka Chun Cheung | Reynold Cheng | Jie Fu
Findings of the Association for Computational Linguistics: EMNLP 2024
2023
Enhancing Multilingual Document-Grounded Dialogue Using Cascaded Prompt-Based Post-Training Models
Jun Liu | Shuang Cheng | Zineng Zhou | Yang Gu | Jian Ye | Haiyong Luo
Proceedings of the Third DialDoc Workshop on Document-grounded Dialogue and Conversational Question Answering
The DialDoc23 shared task presents a Multilingual Document-Grounded Dialogue Systems (MDGDS) challenge, where system responses are generated in multiple languages using the user's query, the historical dialogue record, and relevant passages. A major challenge for this task is the limited training data available in low-resource languages such as French and Vietnamese. In this paper, we propose Cascaded Prompt-based Post-training Models, dividing the task into three subtasks: Retrieval, Reranking, and Generation. We conduct post-training on high-resource languages such as English and Chinese to improve performance on low-resource languages by exploiting similarities between languages. Additionally, we utilize the prompt method to activate the model's ability across diverse languages within the dialogue domain and explore what makes a good prompt. Our comprehensive experiments demonstrate the effectiveness of the proposed methods, which achieved first place on the leaderboard with a total score of 215.40 across token-level F1, SacreBleu, and Rouge-L metrics.
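The abstract describes a three-stage cascade: Retrieval narrows the corpus, Reranking refines the candidate set, and Generation conditions a response on the surviving evidence. Below is a minimal sketch of that control flow only, assuming toy token-overlap scoring as a stand-in for the trained retriever and reranker, and a prompt-building stub in place of the multilingual generator; none of these functions are the authors' released code.

```python
# Sketch of a Retrieval -> Reranking -> Generation cascade for
# document-grounded dialogue. All scoring here is a hypothetical
# token-overlap proxy, not the paper's post-trained models.

def retrieve(query: str, history: list[str], corpus: list[str], k: int = 20) -> list[str]:
    # Stage 1: coarse retrieval over the whole corpus, scoring each
    # passage by token overlap with the query plus dialogue history.
    q = set(" ".join([query, *history]).lower().split())
    scored = sorted(corpus, key=lambda p: len(q & set(p.lower().split())), reverse=True)
    return scored[:k]

def rerank(query: str, candidates: list[str], k: int = 3) -> list[str]:
    # Stage 2: a real system would apply a cross-encoder to the short
    # candidate list; this stand-in breaks overlap ties by brevity.
    q = set(query.lower().split())
    key = lambda p: (len(q & set(p.lower().split())), -len(p))
    return sorted(candidates, key=key, reverse=True)[:k]

def generate(query: str, history: list[str], passages: list[str]) -> str:
    # Stage 3: assemble a prompt that grounds a (hypothetical)
    # multilingual generator on the dialogue and reranked evidence.
    prompt = (
        "Passages:\n" + "\n".join(passages)
        + "\n\nDialogue:\n" + "\n".join(history)
        + f"\nUser: {query}\nAgent:"
    )
    return prompt  # in practice: language_model.generate(prompt)

if __name__ == "__main__":
    corpus = ["Refunds are processed within 5 days.", "Shipping takes 2 weeks."]
    query = "How long do refunds take?"
    top = rerank(query, retrieve(query, [], corpus))
    print(generate(query, ["User: Hi"], top))
```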