Abstract
Error analysis aims to provide insights into system errors at different levels of granularity. NLP as a field has a long-standing tradition of analysing and reporting errors, which is generally considered good practice. Existing error taxonomies are tailored to different types of NLP task. In this paper, we report our work reviewing existing research on meaning/content error types in generated text, attempt to identify emerging consensus among existing meaning/content error taxonomies, and propose a standardised error taxonomy on this basis. We find that there is virtually complete agreement at the highest taxonomic level, where errors of meaning/content divide into (1) Content Omission, (2) Content Addition, and (3) Content Substitution. Consensus at the lower levels is less pronounced, but a compact standardised consensus taxonomy can nevertheless be derived that works across generation tasks and application domains.
- Anthology ID:
- 2023.ranlp-1.58
- Volume:
- Proceedings of the 14th International Conference on Recent Advances in Natural Language Processing
- Month:
- September
- Year:
- 2023
- Address:
- Varna, Bulgaria
- Editors:
- Ruslan Mitkov, Galia Angelova
- Venue:
- RANLP
- Publisher:
- INCOMA Ltd., Shoumen, Bulgaria
- Pages:
- 527–540
- URL:
- https://aclanthology.org/2023.ranlp-1.58
- Cite (ACL):
- Rudali Huidrom and Anya Belz. 2023. Towards a Consensus Taxonomy for Annotating Errors in Automatically Generated Text. In Proceedings of the 14th International Conference on Recent Advances in Natural Language Processing, pages 527–540, Varna, Bulgaria. INCOMA Ltd., Shoumen, Bulgaria.
- Cite (Informal):
- Towards a Consensus Taxonomy for Annotating Errors in Automatically Generated Text (Huidrom & Belz, RANLP 2023)
- PDF:
- https://preview.aclanthology.org/nschneid-patch-1/2023.ranlp-1.58.pdf