“What is the value of templates?” Rethinking Document Information Extraction Datasets for LLMs
Ran Zmigrod, Pranav Shetty, Mathieu Sibue, Zhiqiang Ma, Armineh Nourbakhsh, Xiaomo Liu, Manuela Veloso
Abstract
The rise of large language models (LLMs) for visually rich document understanding (VRDU) has kindled a need for prompt-response, document-based datasets. As annotating new datasets from scratch is labor-intensive, the existing literature has generated prompt-response datasets from available resources using simple templates. For the case of key information extraction (KIE), one of the most common VRDU tasks, past work has typically employed the template “What is the value for the key?”. However, given the variety of questions encountered in the wild, simple and uniform templates are insufficient for creating robust models in research and industrial contexts. In this work, we present K2Q, a diverse collection of five datasets converted from KIE to a prompt-response format using a plethora of bespoke templates. The questions in K2Q can span multiple entities and be extractive or boolean. We empirically compare the performance of seven baseline generative models on K2Q with zero-shot prompting. We further compare three of these models when training on K2Q versus training on simpler templates to motivate the need for our work. We find that creating diverse and intricate KIE questions enhances the performance and robustness of VRDU models. We hope this work encourages future studies on data quality for generative model training.
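To make the idea of template-based conversion concrete, the sketch below shows how KIE key-value annotations might be turned into prompt-response pairs, contrasting the single uniform template with a small pool of more diverse extractive, boolean, and multi-entity templates. This is a minimal, hypothetical illustration only; the template strings, field names, and function names are assumptions for demonstration, not the authors' K2Q generation pipeline.

```python
import random

# Toy illustration of template-based prompt generation from KIE annotations.
# NOT the K2Q pipeline; templates and field names are illustrative assumptions.

SIMPLE_TEMPLATE = "What is the value for {key}?"  # the uniform baseline template

DIVERSE_TEMPLATES = [
    "What is the value for {key}?",
    "Which {key} is listed on the document?",
    "Can you extract the {key} from this document?",
]
BOOLEAN_TEMPLATE = "Is the {key} equal to {candidate}?"          # yes/no question
MULTI_ENTITY_TEMPLATE = "What are the {key_a} and the {key_b}?"  # spans two entities


def simple_questions(annotations: dict[str, str]) -> list[dict[str, str]]:
    """Convert KIE key-value pairs using the single uniform template."""
    return [
        {"question": SIMPLE_TEMPLATE.format(key=k), "answer": v}
        for k, v in annotations.items()
    ]


def diverse_questions(annotations: dict[str, str], seed: int = 0) -> list[dict[str, str]]:
    """Convert KIE key-value pairs into extractive, boolean, and multi-entity QA pairs."""
    rng = random.Random(seed)
    keys = list(annotations)
    qa_pairs = []
    for k, v in annotations.items():
        # Extractive question with a randomly chosen phrasing.
        qa_pairs.append(
            {"question": rng.choice(DIVERSE_TEMPLATES).format(key=k), "answer": v}
        )
        # Boolean question against a (possibly incorrect) candidate value.
        candidate = rng.choice(list(annotations.values()))
        qa_pairs.append(
            {
                "question": BOOLEAN_TEMPLATE.format(key=k, candidate=candidate),
                "answer": "Yes" if candidate == v else "No",
            }
        )
    # One multi-entity question covering two different keys.
    if len(keys) >= 2:
        k_a, k_b = rng.sample(keys, 2)
        qa_pairs.append(
            {
                "question": MULTI_ENTITY_TEMPLATE.format(key_a=k_a, key_b=k_b),
                "answer": f"{annotations[k_a]} and {annotations[k_b]}",
            }
        )
    return qa_pairs


if __name__ == "__main__":
    # Hypothetical invoice-style annotation.
    fields = {"invoice number": "INV-0042", "total amount": "$1,250.00"}
    for qa in diverse_questions(fields):
        print(qa["question"], "->", qa["answer"])
```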
- Anthology ID:
- 2024.findings-emnlp.770
- Volume:
- Findings of the Association for Computational Linguistics: EMNLP 2024
- Month:
- November
- Year:
- 2024
- Address:
- Miami, Florida, USA
- Editors:
- Yaser Al-Onaizan, Mohit Bansal, Yun-Nung Chen
- Venue:
- Findings
- Publisher:
- Association for Computational Linguistics
- Pages:
- 13162–13185
- URL:
- https://aclanthology.org/2024.findings-emnlp.770
- DOI:
- 10.18653/v1/2024.findings-emnlp.770
- Cite (ACL):
- Ran Zmigrod, Pranav Shetty, Mathieu Sibue, Zhiqiang Ma, Armineh Nourbakhsh, Xiaomo Liu, and Manuela Veloso. 2024. “What is the value of templates?” Rethinking Document Information Extraction Datasets for LLMs. In Findings of the Association for Computational Linguistics: EMNLP 2024, pages 13162–13185, Miami, Florida, USA. Association for Computational Linguistics.
- Cite (Informal):
- “What is the value of templates?” Rethinking Document Information Extraction Datasets for LLMs (Zmigrod et al., Findings 2024)
- PDF:
- https://preview.aclanthology.org/landing_page/2024.findings-emnlp.770.pdf