Why Exposure Bias Matters: An Imitation Learning Perspective of Error Accumulation in Language Generation

Kushal Arora, Layla El Asri, Hareesh Bahuleyan, Jackie Cheung


Abstract
Current language generation models suffer from issues such as repetition, incoherence, and hallucinations. An often-repeated hypothesis is that this brittleness is caused by the mismatch between the training and generation procedures, also referred to as exposure bias. In this paper, we verify this hypothesis by analyzing exposure bias from an imitation learning perspective. We show that exposure bias leads to an accumulation of errors during generation, analyze why perplexity fails to capture this accumulation of errors, and empirically show that this accumulation results in poor generation quality.
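The error-accumulation argument from imitation learning can be illustrated with a toy simulation (not taken from the paper; the per-token error rate `eps` and the amplification factor `amplify` are illustrative assumptions). Under teacher forcing the model always conditions on gold prefixes, so errors stay independent; in free-running generation, one mistake pushes the prefix off the training distribution and raises the chance of further mistakes, so errors compound:

```python
import random

def simulate(T, eps, amplify, free_running, trials=10000, seed=0):
    """Average number of token errors over a sequence of length T.

    eps       -- per-token error rate when conditioning on in-distribution prefixes
    amplify   -- assumed factor by which the error rate grows once the prefix
                 has deviated from the data distribution (free-running only)
    free_running -- if True, the model conditions on its own outputs
    """
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        off_distribution = False
        errors = 0
        for _ in range(T):
            p = eps * amplify if (free_running and off_distribution) else eps
            if rng.random() < min(p, 1.0):
                errors += 1
                off_distribution = True  # prefix now deviates from gold data
        total += errors
    return total / trials
```

With teacher forcing the expected error count grows linearly as `eps * T`, whereas the free-running setting accumulates strictly more errors for any `amplify > 1`, mirroring the compounding-error bounds from the imitation learning literature.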
Anthology ID:
2022.findings-acl.58
Volume:
Findings of the Association for Computational Linguistics: ACL 2022
Month:
May
Year:
2022
Address:
Dublin, Ireland
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
700–710
URL:
https://aclanthology.org/2022.findings-acl.58
DOI:
10.18653/v1/2022.findings-acl.58
Cite (ACL):
Kushal Arora, Layla El Asri, Hareesh Bahuleyan, and Jackie Cheung. 2022. Why Exposure Bias Matters: An Imitation Learning Perspective of Error Accumulation in Language Generation. In Findings of the Association for Computational Linguistics: ACL 2022, pages 700–710, Dublin, Ireland. Association for Computational Linguistics.
Cite (Informal):
Why Exposure Bias Matters: An Imitation Learning Perspective of Error Accumulation in Language Generation (Arora et al., Findings 2022)
PDF:
https://preview.aclanthology.org/ingestion-script-update/2022.findings-acl.58.pdf
Video:
 https://preview.aclanthology.org/ingestion-script-update/2022.findings-acl.58.mp4
Code:
kushalarora/quantifying_exposure_bias
Data:
WikiText-103
WikiText-2