Distilling Cross-Modal Knowledge into Domain-Specific Retrievers for Enhanced Industrial Document Understanding
Jinhyeong Lim | Jeongwan Shin | Seeun Lee | Seongdeok Kim | Joungsu Choi | Jongbae Kim | Chun Hwan Jung | Youjin Kang
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing: Industry Track
Retrieval-Augmented Generation (RAG) has shown strong performance in open-domain tasks, but its effectiveness in industrial domains is limited by a lack of domain understanding and by document structural elements (DSE) such as tables, figures, charts, and formulas. To address this challenge, we propose an efficient knowledge distillation framework that transfers complementary knowledge from both Large Language Models (LLMs) and Vision-Language Models (VLMs) into a compact domain-specific retriever. Extensive experiments and analysis on real-world industrial datasets from the shipbuilding and electrical equipment domains demonstrate that the proposed framework improves both domain understanding and visual-structural retrieval, outperforming larger baselines at significantly lower computational cost.
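The abstract does not spell out the distillation objective. As a rough illustration of how complementary LLM and VLM relevance signals might be distilled into a compact retriever, the PyTorch sketch below mixes the two teachers' candidate scores and minimizes a temperature-scaled KL divergence against the student's scores. The mixing weight `alpha`, temperature `tau`, and score shapes are illustrative assumptions, not details from the paper.

```python
import torch
import torch.nn.functional as F

def distill_loss(student_scores, llm_scores, vlm_scores, alpha=0.5, tau=2.0):
    """Distill combined teacher relevance scores into a compact student retriever.

    student_scores: (batch, n_candidates) similarity logits from the retriever
    llm_scores / vlm_scores: (batch, n_candidates) teacher relevance logits
    alpha: assumed weight mixing textual (LLM) and visual-structural (VLM) teachers
    tau: softmax temperature for distillation
    """
    # Mix the two teachers' logits (an assumed combination scheme).
    teacher_scores = alpha * llm_scores + (1.0 - alpha) * vlm_scores
    teacher_probs = F.softmax(teacher_scores / tau, dim=-1)
    student_log_probs = F.log_softmax(student_scores / tau, dim=-1)
    # Standard temperature-scaled KL distillation objective.
    return F.kl_div(student_log_probs, teacher_probs, reduction="batchmean") * tau**2

# Toy usage: 4 queries, each scored against 8 candidate passages.
if __name__ == "__main__":
    torch.manual_seed(0)
    student = torch.randn(4, 8, requires_grad=True)
    llm_teacher = torch.randn(4, 8)
    vlm_teacher = torch.randn(4, 8)
    loss = distill_loss(student, llm_teacher, vlm_teacher)
    loss.backward()
    print(f"distillation loss: {loss.item():.4f}")
```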