R-TeaFor: Regularized Teacher-Forcing for Abstractive Summarization

Guan-Yu Lin, Pu-Jen Cheng


Abstract
Teacher-forcing is widely used in training sequence generation models to improve sampling efficiency and to stabilize training. However, teacher-forcing is vulnerable to the exposure bias problem. Previous works have attempted to address exposure bias by modifying the training data to simulate model-generated results. Nevertheless, they do not consider the pairwise relationship between the original training data and the modified data, which provides additional information during training. Hence, we propose Regularized Teacher-Forcing (R-TeaFor), which exploits this relationship for better regularization. Empirically, our experiments show that R-TeaFor outperforms previous state-of-the-art summarization models, and that the results generalize to different pre-trained models.
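
A minimal, hypothetical sketch of the general idea described in the abstract (not the authors' implementation): compute the usual teacher-forcing cross-entropy on the original target, run a second forward pass on a perturbed, "model-simulated" version of the decoder input, and add a consistency regularizer (here a symmetric KL term) that ties the two output distributions together. All names, the model-calling convention, and the choice of symmetric KL are assumptions for illustration only.

import torch
import torch.nn.functional as F

def r_teafor_style_loss(model, input_ids, decoder_input_ids, labels,
                        perturbed_decoder_input_ids, alpha=1.0):
    """Hypothetical regularized teacher-forcing loss (sketch, not the paper's code).

    Assumes a Hugging Face-style seq2seq model whose forward pass returns
    per-token logits via model(input_ids, decoder_input_ids=...).logits.
    """
    # Forward pass with the original (gold) decoder inputs.
    logits_orig = model(input_ids, decoder_input_ids=decoder_input_ids).logits
    # Forward pass with perturbed decoder inputs that simulate model-generated tokens.
    logits_pert = model(input_ids, decoder_input_ids=perturbed_decoder_input_ids).logits

    # Standard teacher-forcing objective on the gold summary tokens.
    ce = F.cross_entropy(logits_orig.view(-1, logits_orig.size(-1)),
                         labels.view(-1), ignore_index=-100)

    # Pairwise consistency between the two output distributions (symmetric KL).
    p = F.log_softmax(logits_orig, dim=-1)
    q = F.log_softmax(logits_pert, dim=-1)
    kl = 0.5 * (F.kl_div(q, p, log_target=True, reduction="batchmean")
                + F.kl_div(p, q, log_target=True, reduction="batchmean"))

    return ce + alpha * kl

How the perturbed decoder inputs are constructed (e.g., replacing some gold tokens with sampled model predictions) and how the two terms are weighted are method details not specified on this page; alpha here is simply an assumed trade-off hyperparameter.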
Anthology ID:
2022.emnlp-main.423
Volume:
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing
Month:
December
Year:
2022
Address:
Abu Dhabi, United Arab Emirates
Editors:
Yoav Goldberg, Zornitsa Kozareva, Yue Zhang
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
6303–6311
URL:
https://aclanthology.org/2022.emnlp-main.423
DOI:
10.18653/v1/2022.emnlp-main.423
Cite (ACL):
Guan-Yu Lin and Pu-Jen Cheng. 2022. R-TeaFor: Regularized Teacher-Forcing for Abstractive Summarization. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 6303–6311, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics.
Cite (Informal):
R-TeaFor: Regularized Teacher-Forcing for Abstractive Summarization (Lin & Cheng, EMNLP 2022)
PDF:
https://preview.aclanthology.org/naacl24-info/2022.emnlp-main.423.pdf