Distilling Cross-Modal Knowledge into Domain-Specific Retrievers for Enhanced Industrial Document Understanding

Jinhyeong Lim, Jeongwan Shin, Seeun Lee, Seongdeok Kim, Joungsu Choi, Jongbae Kim, Chun Hwan Jung, Youjin Kang


Abstract
Retrieval-Augmented Generation (RAG) has shown strong performance in open-domain tasks, but its effectiveness in industrial domains is limited by a lack of domain understanding and by document structural elements (DSE) such as tables, figures, charts, and formulas. To address this challenge, we propose an efficient knowledge distillation framework that transfers complementary knowledge from both Large Language Models (LLMs) and Vision-Language Models (VLMs) into a compact domain-specific retriever. Extensive experiments and analysis on real-world industrial datasets from the shipbuilding and electrical equipment domains demonstrate that the proposed framework improves both domain understanding and visual-structural retrieval, outperforming larger baselines while requiring significantly less computation.
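To make the distillation idea in the abstract concrete, the sketch below shows one common way such cross-modal knowledge transfer can be set up: a compact student retriever is trained to match a fused soft relevance distribution produced by an LLM teacher (text) and a VLM teacher (page images). This is a minimal illustrative sketch, not the authors' implementation; the function name, the alpha/temperature fusion, and the placeholder tensors are all assumptions.

```python
# Hypothetical sketch of cross-modal distillation into a compact retriever.
# All names, dimensions, and weights are illustrative assumptions.
import torch
import torch.nn.functional as F

def distill_loss(student_scores, llm_scores, vlm_scores,
                 alpha=0.5, temperature=2.0):
    """Temperature-scaled KL distillation against a fused teacher distribution.

    student_scores: (batch, num_candidates) similarity scores from the compact
                    retriever (e.g., dot products of dense query/passage embeddings).
    llm_scores/vlm_scores: teacher relevance scores for the same candidates,
                    e.g., from an LLM on text and a VLM on rendered pages.
    alpha:          hypothetical weight mixing the two teachers.
    """
    # Fuse the two teachers into a single soft target distribution.
    teacher = alpha * llm_scores + (1.0 - alpha) * vlm_scores
    teacher_probs = F.softmax(teacher / temperature, dim=-1)
    student_log_probs = F.log_softmax(student_scores / temperature, dim=-1)
    # Standard soft-label distillation objective.
    return F.kl_div(student_log_probs, teacher_probs,
                    reduction="batchmean") * temperature ** 2

if __name__ == "__main__":
    batch, num_candidates = 4, 8
    student = torch.randn(batch, num_candidates, requires_grad=True)
    llm = torch.randn(batch, num_candidates)   # placeholder LLM teacher scores
    vlm = torch.randn(batch, num_candidates)   # placeholder VLM teacher scores
    loss = distill_loss(student, llm, vlm)
    loss.backward()
    print(f"distillation loss: {loss.item():.4f}")
```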
Anthology ID:
2025.emnlp-industry.173
Volume:
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing: Industry Track
Month:
November
Year:
2025
Address:
Suzhou (China)
Editors:
Saloni Potdar, Lina Rojas-Barahona, Sebastien Montella
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
2551–2563
URL:
https://preview.aclanthology.org/ingest-emnlp/2025.emnlp-industry.173/
Cite (ACL):
Jinhyeong Lim, Jeongwan Shin, Seeun Lee, Seongdeok Kim, Joungsu Choi, Jongbae Kim, Chun Hwan Jung, and Youjin Kang. 2025. Distilling Cross-Modal Knowledge into Domain-Specific Retrievers for Enhanced Industrial Document Understanding. In Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing: Industry Track, pages 2551–2563, Suzhou (China). Association for Computational Linguistics.
Cite (Informal):
Distilling Cross-Modal Knowledge into Domain-Specific Retrievers for Enhanced Industrial Document Understanding (Lim et al., EMNLP 2025)
PDF:
https://preview.aclanthology.org/ingest-emnlp/2025.emnlp-industry.173.pdf