DocKD: Knowledge Distillation from LLMs for Open-World Document Understanding Models

Sungnyun Kim, Haofu Liao, Srikar Appalaraju, Peng Tang, Zhuowen Tu, Ravi Kumar Satzoda, R. Manmatha, Vijay Mahadevan, Stefano Soatto


Abstract
Visual document understanding (VDU) is a challenging task that involves understanding documents across various modalities (text and image) and layouts (forms, tables, etc.). This study aims to enhance the generalizability of small VDU models by distilling knowledge from LLMs. We identify that directly prompting LLMs often fails to generate informative and useful data. In response, we present a new framework (called DocKD) that enriches the data generation process by integrating external document knowledge. Specifically, we provide an LLM with various document elements like key-value pairs, layouts, and descriptions to elicit open-ended answers. Our experiments show that DocKD produces high-quality document annotations and surpasses the direct knowledge distillation approach that does not leverage external document knowledge. Moreover, student VDU models trained solely with DocKD-generated data are not only comparable to those trained with human-annotated data on in-domain tasks but also significantly outperform them on out-of-domain tasks.
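To make the knowledge-enriched data generation concrete, the sketch below shows how document elements (key-value pairs, layout hints, and a description) might be assembled into a single LLM prompt for producing open-ended QA annotations. This is a minimal illustration of the idea described in the abstract; the function name, prompt wording, and toy document are assumptions, not the paper's actual prompts.

```python
# Illustrative sketch of knowledge-enriched prompting in the spirit of DocKD.
# All names and prompt text here are hypothetical, not the paper's exact setup.
import json

def build_prompt(kv_pairs, layout, description):
    """Assemble a prompt that supplies external document knowledge
    (key-value pairs, layout, free-text description) so an LLM can
    generate open-ended question-answer annotations for the document."""
    return (
        "You are given information extracted from a document.\n"
        f"Key-value pairs: {json.dumps(kv_pairs)}\n"
        f"Layout (element -> region): {json.dumps(layout)}\n"
        f"Description: {description}\n"
        "Generate diverse question-answer pairs grounded in this document."
    )

# Example usage with a toy invoice; the resulting prompt would be sent to an LLM
# and the generated QA pairs used to train a small student VDU model.
prompt = build_prompt(
    kv_pairs={"Invoice No.": "10482", "Total": "$312.50"},
    layout={"Invoice No.": "header", "Total": "table, last row"},
    description="A one-page invoice with an itemized charges table.",
)
print(prompt)
```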
Anthology ID:
2024.emnlp-main.185
Volume:
Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing
Month:
November
Year:
2024
Address:
Miami, Florida, USA
Editors:
Yaser Al-Onaizan, Mohit Bansal, Yun-Nung Chen
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
3167–3193
URL:
https://preview.aclanthology.org/jlcl-multiple-ingestion/2024.emnlp-main.185/
DOI:
10.18653/v1/2024.emnlp-main.185
Cite (ACL):
Sungnyun Kim, Haofu Liao, Srikar Appalaraju, Peng Tang, Zhuowen Tu, Ravi Kumar Satzoda, R. Manmatha, Vijay Mahadevan, and Stefano Soatto. 2024. DocKD: Knowledge Distillation from LLMs for Open-World Document Understanding Models. In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, pages 3167–3193, Miami, Florida, USA. Association for Computational Linguistics.
Cite (Informal):
DocKD: Knowledge Distillation from LLMs for Open-World Document Understanding Models (Kim et al., EMNLP 2024)
PDF:
https://preview.aclanthology.org/jlcl-multiple-ingestion/2024.emnlp-main.185.pdf