Weisi Fan
2024
On the Intractability to Synthesize Factual Inconsistencies in Summarization
Ge Luo | Weisi Fan | Miaoran Li | Youbiao He | Yinfei Yang | Forrest Bao
Findings of the Association for Computational Linguistics: EACL 2024
Factual consistency detection has received increased attention in the task of abstractive summarization. Many existing works rely on synthetic training data, which may not accurately reflect or match the inconsistencies produced by summarization models. In this paper, we first systematically analyze the shortcomings of current methods for synthesizing inconsistent summaries. Per our quantitative and qualitative study, current synthesis methods may fail to produce inconsistencies involving coreference errors and discourse errors. Then, employing the parameter-efficient finetuning (PEFT) technique, we show that a competitive factual consistency detector can be trained using thousands of real model-generated summaries with human annotations. Our study demonstrates the importance of real machine-generated texts with human annotation in NLG evaluation, as our model outperforms the SOTA on the CoGenSumm, FactCC, Frank, and SummEval datasets.
SummaCoz: A Dataset for Improving the Interpretability of Factual Consistency Detection for Summarization
Ge Luo | Weisi Fan | Miaoran Li | Guoruizhe Sun | Runlong Zhang | Chenyu Xu | Forrest Sheng Bao
Findings of the Association for Computational Linguistics: EMNLP 2024
Summarization is an important application of Large Language Models (LLMs). When judging the quality of a summary, factual consistency carries significant weight. Despite numerous efforts dedicated to building factual inconsistency detectors, the exploration of explainability remains limited among existing efforts. In this study, we incorporate both human-annotated and model-generated natural language explanations that elucidate how a summary deviates from, and thus becomes inconsistent with, its source article. We build our explanation-augmented dataset on top of the widely used SummaC summarization consistency benchmark. Additionally, we develop an inconsistency detector that is jointly trained with the collected explanations. Our findings demonstrate that integrating explanations during training not only enables the model to provide rationales for its judgments but also significantly enhances its accuracy.