Learning to Generate Equitable Text in Dialogue from Biased Training Data

Anthony Sicilia, Malihe Alikhani


Abstract
The ingrained principles of fairness in a dialogue system’s decision-making process and generated responses are crucial for user engagement, satisfaction, and task achievement. The absence of equitable and inclusive principles can hinder the formation of common ground, which in turn negatively impacts the overall performance of the system. For example, misusing pronouns in a user interaction may cause ambiguity about the intended subject. Yet, there is no comprehensive study of equitable text generation in dialogue. Aptly, in this work, we use theories of computational learning to study this problem. We provide formal definitions of equity in text generation, and further, prove formal connections between learning human-likeness and learning equity: algorithms for improving equity ultimately reduce to algorithms for improving human-likeness (on augmented data). With this insight, we also formulate reasonable conditions under which text generation algorithms can learn to generate equitable text without any modifications to the biased training data on which they learn. To exemplify our theory in practice, we look at a group of algorithms for the GuessWhat?! visual dialogue game and, using this example, test our theory empirically. Our theory accurately predicts the relative performance of multiple algorithms in generating equitable text as measured by both human and automated evaluation.
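The reduction stated in the abstract, that improving equity reduces to improving human-likeness on augmented data, can be read as a simple training recipe. The sketch below is an illustration of that reading only, not the paper's implementation: it assumes a toy counterfactual pronoun-swap augmentation and uses a placeholder unigram "trainer"; the function names are hypothetical.

# Minimal sketch (illustration, not the authors' code): augment a biased
# dialogue corpus, then hand it to an ordinary human-likeness trainer.
from collections import Counter

def augment_with_counterfactuals(dialogues, swaps=None):
    """Hypothetical augmentation: add a copy of each dialogue with
    gendered pronouns swapped, balancing an otherwise biased corpus."""
    swaps = swaps or {"he": "she", "she": "he", "his": "her", "her": "his"}
    augmented = list(dialogues)
    for turns in dialogues:
        augmented.append([
            " ".join(swaps.get(tok, tok) for tok in turn.split())
            for turn in turns
        ])
    return augmented

def train_for_human_likeness(dialogues):
    """Stand-in for any likelihood-based generator trainer; a unigram
    count marks where real model fitting would occur."""
    return Counter(tok for turns in dialogues
                   for turn in turns for tok in turn.split())

if __name__ == "__main__":
    biased_corpus = [["is he wearing a hat ?", "yes he is"]]
    model = train_for_human_likeness(augment_with_counterfactuals(biased_corpus))
    print(model.most_common(5))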
Anthology ID:
2023.acl-long.163
Volume:
Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
Month:
July
Year:
2023
Address:
Toronto, Canada
Editors:
Anna Rogers, Jordan Boyd-Graber, Naoaki Okazaki
Venue:
ACL
Publisher:
Association for Computational Linguistics
Pages:
2898–2917
URL:
https://aclanthology.org/2023.acl-long.163
DOI:
10.18653/v1/2023.acl-long.163
Bibkey:
Cite (ACL):
Anthony Sicilia and Malihe Alikhani. 2023. Learning to Generate Equitable Text in Dialogue from Biased Training Data. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2898–2917, Toronto, Canada. Association for Computational Linguistics.
Cite (Informal):
Learning to Generate Equitable Text in Dialogue from Biased Training Data (Sicilia & Alikhani, ACL 2023)
PDF:
https://preview.aclanthology.org/ingest-2024-clasp/2023.acl-long.163.pdf
Video:
https://preview.aclanthology.org/ingest-2024-clasp/2023.acl-long.163.mp4