Minjun Park
Also published as: MinJun Park
2025
LLM Meets Scene Graph: Can Large Language Models Understand and Generate Scene Graphs? A Benchmark and Empirical Study
Dongil Yang | Minjin Kim | Sunghwan Kim | Beong-woo Kwak | Minjun Park | Jinseok Hong | Woontack Woo | Jinyoung Yeo
Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
The remarkable reasoning and generalization capabilities of Large Language Models (LLMs) have paved the way for their expanding applications in embodied AI, robotics, and other real-world tasks. To effectively support these applications, grounding in spatial and temporal understanding in multimodal environments is essential. To this end, recent works have leveraged scene graphs, a structured representation that encodes entities, attributes, and their relationships in a scene. However, a comprehensive evaluation of LLMs’ ability to utilize scene graphs remains limited. In this work, we introduce Text-Scene Graph (TSG) Bench, a benchmark designed to systematically assess LLMs’ ability to (1) understand scene graphs and (2) generate them from textual narratives. With TSG Bench, we evaluate 11 LLMs and reveal that, while models perform well on scene graph understanding, they struggle with scene graph generation, particularly for complex narratives. Our analysis indicates that these models fail to effectively decompose discrete scenes from a complex narrative, leading to a bottleneck when generating scene graphs. These findings underscore the need for improved methodologies in scene graph generation and provide valuable insights for future research. A demonstration of our benchmark is available at https://tsg-bench.netlify.app. Additionally, our code and evaluation data are publicly available at https://github.com/docworlds/tsg-bench.
System Report for CCL25-Eval Task 4: From Plain to Hierarchical — Knowledge-Augmented Prompting for Chinese Factivity Inference
Minjun Park | Seulki Lee
Proceedings of the 24th China National Conference on Computational Linguistics (CCL 2025)
To improve the factivity inference capability of large language models (LLMs), we adopted a Retrieval-Augmented Generation (RAG) framework using a curated bibliography on Chinese factivity semantics. We compared a baseline without retrieval against two RAG-based strategies, showing that hierarchical prompting with RAPTOR yields the highest accuracy. Using recursive summarization from the bottom up, RAPTOR allows models to access document context at multiple abstraction levels, resulting in more accurate and stable inference. Our findings contribute to deeper Chinese semantic inference through linguistic knowledge-augmented prompting in factivity inference and textual entailment.