Abstract
Automated evaluation of text generation systems has recently seen increasing attention, particularly for checking whether generated text stays truthful to its input sources. Existing methods frequently rely on evaluation with task-specific language models, which in turn allows for little interpretability of the generated scores. We introduce SRLScore, a reference-free evaluation metric designed with text summarization in mind. Our approach generates fact tuples constructed from Semantic Role Labels, applied to both input and summary texts. A final factuality score is computed by an adjustable scoring mechanism, which allows for easy adaptation of the method across domains. Correlation with human judgments on English summarization datasets shows that SRLScore is competitive with state-of-the-art methods and exhibits stable generalization across datasets without requiring further training or hyperparameter tuning. We experiment with an optional co-reference resolution step, but find that the performance boost is mostly outweighed by the additional compute required. Our metric is available online at: https://github.com/heyjing/SRLScore
- Anthology ID:
- 2023.starsem-1.9
- Volume:
- Proceedings of the 12th Joint Conference on Lexical and Computational Semantics (*SEM 2023)
- Month:
- July
- Year:
- 2023
- Address:
- Toronto, Canada
- Venue:
- *SEM
- Publisher:
- Association for Computational Linguistics
- Pages:
- 89–100
- URL:
- https://aclanthology.org/2023.starsem-1.9
- Cite (ACL):
- Jing Fan, Dennis Aumiller, and Michael Gertz. 2023. Evaluating Factual Consistency of Texts with Semantic Role Labeling. In Proceedings of the 12th Joint Conference on Lexical and Computational Semantics (*SEM 2023), pages 89–100, Toronto, Canada. Association for Computational Linguistics.
- Cite (Informal):
- Evaluating Factual Consistency of Texts with Semantic Role Labeling (Fan et al., *SEM 2023)
- PDF:
- https://preview.aclanthology.org/paclic-22-ingestion/2023.starsem-1.9.pdf
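The abstract describes comparing SRL-derived fact tuples between source and summary with an adjustable scoring mechanism. The following is a minimal, illustrative sketch of that idea, not the authors' implementation: it assumes fact tuples (agent, verb, patient) have already been extracted by an SRL system, and the slot weights and function names are hypothetical.

```python
# Illustrative SRLScore-style factuality sketch. The real metric extracts
# tuples with a Semantic Role Labeler; here tuples are given directly.

def tuple_similarity(source_tuple, summary_tuple, weights=(0.4, 0.2, 0.4)):
    """Weighted slot agreement between two (agent, verb, patient) tuples.

    The weights are the adjustable part of the scoring mechanism: shifting
    them changes how much each semantic role contributes to the score.
    """
    score = 0.0
    for src_slot, summ_slot, w in zip(source_tuple, summary_tuple, weights):
        if summ_slot is None:  # slot missing from the summary tuple
            continue
        if src_slot is not None and src_slot == summ_slot:
            score += w
    return score

def srl_score(source_tuples, summary_tuples):
    """Average best-match support of each summary tuple in the source text."""
    if not summary_tuples:
        return 1.0  # an empty summary makes no unsupported claims
    per_tuple = [
        max(tuple_similarity(src, summ) for src in source_tuples)
        for summ in summary_tuples
    ]
    return sum(per_tuple) / len(per_tuple)

source = [("Alice", "founded", "Acme"), ("Acme", "acquired", "Beta")]
faithful = [("Alice", "founded", "Acme")]
hallucinated = [("Bob", "founded", "Acme")]  # agent contradicts the source

print(srl_score(source, faithful))      # → 1.0
print(srl_score(source, hallucinated))  # → 0.6 (verb + patient match only)
```

Because the score is just a weighted tuple overlap, it stays interpretable: a low score can be traced back to the specific summary tuple that found no well-matching counterpart in the source.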