Abstract
This paper describes our submission to the fifth track of the Eleventh Dialog System Technology Challenge (DSTC-11), which focuses on “Task-oriented Conversational Modeling with Subjective Knowledge”. We focus on response generation and leverage a ranking strategy to ensemble individual models: BART, Long-T5, and a fine-tuned large language model based on LLaMA. The strategy is supplemented by techniques such as low-rank adaptation, which keeps fine-tuning of these large models efficient without sacrificing performance. Experiments show that the ensemble method outperforms both the individual models and the baseline. Our submission ranked 1st in ROUGE_1, 2nd in ROUGE_L, and 4th in human evaluation among 14 participating teams.
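The core idea described above, generating candidate responses with several models and selecting among them with a ranking model, can be sketched as below. This is a minimal illustration, not the authors' implementation: the `ensemble_by_ranking` function, the placeholder generators, and the toy length-based ranker are all assumptions introduced here, standing in for the paper's BART, Long-T5, LoRA-tuned LLaMA, and trained ranking model.

```python
# Sketch of ranking-based ensembling over candidate responses.
# All names below are hypothetical placeholders for illustration only.
from typing import Callable, List, Tuple


def ensemble_by_ranking(
    dialogue_context: str,
    generators: List[Callable[[str], str]],
    ranker: Callable[[str, str], float],
) -> Tuple[str, float]:
    """Generate one candidate per model, score each candidate with the
    ranker, and return the highest-scoring response with its score."""
    candidates = [generate(dialogue_context) for generate in generators]
    scored = [(ranker(dialogue_context, cand), cand) for cand in candidates]
    best_score, best_response = max(scored, key=lambda pair: pair[0])
    return best_response, best_score


if __name__ == "__main__":
    # Stand-ins for the individual generation models in the ensemble.
    generators = [
        lambda ctx: "Candidate from a BART-style model.",
        lambda ctx: "Candidate from a Long-T5-style model.",
        lambda ctx: "Candidate from a LoRA-tuned LLaMA-style model.",
    ]
    # Toy ranker: prefers longer responses; a real ranker would be a trained model.
    ranker = lambda ctx, resp: float(len(resp))
    response, score = ensemble_by_ranking(
        "User: Is the hotel's parking free?", generators, ranker
    )
    print(f"Selected (score {score:.1f}): {response}")
```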
- Anthology ID: 2023.dstc-1.20
- Volume: Proceedings of The Eleventh Dialog System Technology Challenge
- Month: September
- Year: 2023
- Address: Prague, Czech Republic
- Editors: Yun-Nung Chen, Paul Crook, Michel Galley, Sarik Ghazarian, Chulaka Gunasekara, Raghav Gupta, Behnam Hedayatnia, Satwik Kottur, Seungwhan Moon, Chen Zhang
- Venues: DSTC | WS
- Publisher: Association for Computational Linguistics
- Pages: 177–184
- URL: https://aclanthology.org/2023.dstc-1.20
- Cite (ACL): Xin Huang, Kye Min Tan, Richeng Duan, and Bowei Zou. 2023. Ensemble Method via Ranking Model for Conversational Modeling with Subjective Knowledge. In Proceedings of The Eleventh Dialog System Technology Challenge, pages 177–184, Prague, Czech Republic. Association for Computational Linguistics.
- Cite (Informal): Ensemble Method via Ranking Model for Conversational Modeling with Subjective Knowledge (Huang et al., DSTC-WS 2023)
- PDF: https://preview.aclanthology.org/landing_page/2023.dstc-1.20.pdf