Abstract
Neural models for response generation produce responses that are semantically plausible but not necessarily factually consistent with the facts describing the speaker's persona. These models are trained with fully supervised learning, where the objective function barely captures factual consistency. We propose to fine-tune these models with reinforcement learning and an efficient reward function that explicitly captures both the consistency between a response and the persona facts and the response's semantic plausibility. Our automatic and human evaluations on the PersonaChat corpus confirm that our approach increases the rate of responses that are factually consistent with persona facts over its supervised counterpart, while retaining the language quality of responses.
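The abstract describes the approach only at a high level. The sketch below illustrates the general shape of such RL fine-tuning, assuming a REINFORCE-style policy gradient and a reward that linearly combines a consistency term and a plausibility term. The toy generator, the two scoring stubs, and the weight `alpha` are hypothetical placeholders for illustration, not the authors' implementation; see the paper (pages 549–562) for the actual reward function and training setup.

```python
# Minimal REINFORCE-style sketch of the fine-tuning idea in the abstract:
# sample a response from the generator, score it with a reward combining
# factual consistency with persona facts and semantic plausibility, and
# update the generator with a policy gradient. All components here are
# simplified placeholders, not the authors' method.
import torch
import torch.nn as nn

VOCAB, HIDDEN, MAX_LEN = 100, 64, 20

class ToyGenerator(nn.Module):
    """Tiny autoregressive decoder standing in for a pretrained response model."""
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, HIDDEN)
        self.rnn = nn.GRU(HIDDEN, HIDDEN, batch_first=True)
        self.out = nn.Linear(HIDDEN, VOCAB)

    def sample(self, max_len=MAX_LEN):
        """Sample a response token by token; return tokens and summed log-probs."""
        tok = torch.zeros(1, 1, dtype=torch.long)  # BOS placeholder
        h, toks, logps = None, [], []
        for _ in range(max_len):
            out, h = self.rnn(self.embed(tok), h)
            dist = torch.distributions.Categorical(logits=self.out(out[:, -1]))
            tok = dist.sample().unsqueeze(0)
            toks.append(tok.item())
            logps.append(dist.log_prob(tok.squeeze(0)))
        return toks, torch.stack(logps).sum()

def consistency_score(response, persona_facts):
    """Stub for a learned consistency scorer, e.g. an NLI-style classifier
    judging whether the response is supported by the persona facts."""
    return 0.5  # placeholder value

def plausibility_score(response):
    """Stub for a semantic-plausibility score, e.g. a language-model likelihood."""
    return 0.5  # placeholder value

generator = ToyGenerator()
optimizer = torch.optim.Adam(generator.parameters(), lr=1e-4)
persona_facts = ["i have two dogs.", "i work as a teacher."]
alpha = 0.5  # assumed trade-off between the two reward terms

for step in range(10):
    response, logp = generator.sample()
    reward = alpha * consistency_score(response, persona_facts) \
             + (1 - alpha) * plausibility_score(response)
    loss = -reward * logp  # REINFORCE: raise log-prob of high-reward responses
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```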
- Anthology ID:
- 2021.eacl-main.44
- Volume:
- Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume
- Month:
- April
- Year:
- 2021
- Address:
- Online
- Editors:
- Paola Merlo, Jörg Tiedemann, Reut Tsarfaty
- Venue:
- EACL
- Publisher:
- Association for Computational Linguistics
- Pages:
- 549–562
- URL:
- https://aclanthology.org/2021.eacl-main.44
- DOI:
- 10.18653/v1/2021.eacl-main.44
- Cite (ACL):
- Mohsen Mesgar, Edwin Simpson, and Iryna Gurevych. 2021. Improving Factual Consistency Between a Response and Persona Facts. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, pages 549–562, Online. Association for Computational Linguistics.
- Cite (Informal):
- Improving Factual Consistency Between a Response and Persona Facts (Mesgar et al., EACL 2021)
- PDF:
- https://aclanthology.org/2021.eacl-main.44.pdf
- Data
- ConvAI2