@inproceedings{duan-etal-2024-alleviating,
    title = "Alleviating Exposure Bias in Abstractive Summarization via Sequentially Generating and Revising",
    author = "Duan, Jiaxin  and
      Lu, Fengyu  and
      Liu, Junfei",
    editor = "Calzolari, Nicoletta  and
      Kan, Min-Yen  and
      Hoste, Veronique  and
      Lenci, Alessandro  and
      Sakti, Sakriani  and
      Xue, Nianwen",
    booktitle = "Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)",
    month = may,
    year = "2024",
    address = "Torino, Italia",
    publisher = "ELRA and ICCL",
    url = "https://preview.aclanthology.org/ingest-emnlp/2024.lrec-main.66/",
    pages = "739--750",
    abstract = "Abstractive summarization commonly suffers from exposure bias caused by supervised teacher-forced learning, i.e., a model predicts the next token conditioned on the ground-truth pre-context during training but on its own preceding outputs at inference. Existing solutions bridge this gap through un- or semi-supervised holistic learning yet still leave the risk of error accumulation while generating a summary. In this paper, we attribute this problem to the limitation of unidirectional autoregressive text generation and introduce post-processing steps to alleviate it. Specifically, we reformat abstractive summarization to sequential generation and revision (SeGRe), i.e., a model in the revision phase re-inputs the generated summary and refines it by contrasting it with the source document. This provides the model with additional opportunities to assess the flawed summary from a global view and thereby modify inappropriate expressions. Moreover, we train the SeGRe model with a regularized minimum-risk policy to ensure effective generation and revision. Extensive comparative experiments are conducted on two well-known datasets, exhibiting the new or matched state-of-the-art performance of SeGRe."
}