Abstract
Conversational Recommendation Systems (CRSs) form a rapidly growing research area that has gained significant attention alongside advancements in language modelling techniques. However, the current state of conversational recommendation faces numerous challenges due to its relative novelty and the limited number of existing contributions. In this study, we examine benchmark datasets for developing CRS models and address potential biases arising from the feedback loop inherent in multi-turn interactions, including selection bias and multiple variants of popularity bias. Drawing inspiration from the success of generative data produced by language models and of data augmentation techniques, we present two novel strategies, ‘Once-Aug’ and ‘PopNudge’, to enhance model performance while mitigating biases. Through extensive experiments on the ReDial and TG-ReDial benchmark datasets, we show consistent improvements of CRS techniques with our data augmentation approaches and offer additional insights into addressing multiple newly formulated biases.
- Anthology ID: 2023.findings-emnlp.233
- Volume: Findings of the Association for Computational Linguistics: EMNLP 2023
- Month: December
- Year: 2023
- Address: Singapore
- Editors: Houda Bouamor, Juan Pino, Kalika Bali
- Venue: Findings
- Publisher: Association for Computational Linguistics
- Pages: 3609–3622
- URL: https://aclanthology.org/2023.findings-emnlp.233
- DOI: 10.18653/v1/2023.findings-emnlp.233
- Cite (ACL): Xi Wang, Hossein Rahmani, Jiqun Liu, and Emine Yilmaz. 2023. Improving Conversational Recommendation Systems via Bias Analysis and Language-Model-Enhanced Data Augmentation. In Findings of the Association for Computational Linguistics: EMNLP 2023, pages 3609–3622, Singapore. Association for Computational Linguistics.
- Cite (Informal): Improving Conversational Recommendation Systems via Bias Analysis and Language-Model-Enhanced Data Augmentation (Wang et al., Findings 2023)
- PDF: https://preview.aclanthology.org/nschneid-patch-3/2023.findings-emnlp.233.pdf