System Report for CCL25-Eval Task 4: From Plain to Hierarchical —Knowledge-Augmented Prompting for Chinese Factivity Inference

Minjun Park, Seulki Lee


Abstract
To improve the factivity inference capability of large language models (LLMs), we adopted a Retrieval-Augmented Generation (RAG) framework using a curated bibliography on Chinese factivity semantics. We compared a baseline without retrieval against two RAG-based strategies, showing that hierarchical prompting with RAPTOR yields the highest accuracy. Using recursive summarization from the bottom up, RAPTOR allows models to access document context at multiple abstraction levels, resulting in more accurate and stable inference. Our findings contribute to deeper Chinese semantic inference through linguistic knowledge-augmented prompting in factivity inference and textual entailment.
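The abstract's core mechanism, RAPTOR-style bottom-up recursive summarization, can be sketched as follows. This is a minimal illustration, not the authors' implementation: the `summarize` function here is a hypothetical stand-in (simple join-and-truncate) for the LLM summarizer RAPTOR actually uses, and the fixed `group_size` replaces RAPTOR's clustering step.

```python
def summarize(texts, max_len=80):
    """Stand-in for an LLM summarizer: join the texts and truncate.
    (Hypothetical placeholder; RAPTOR uses an LLM here.)"""
    return " ".join(texts)[:max_len]

def build_raptor_tree(chunks, group_size=2):
    """Recursively group and summarize chunks, keeping every level.

    Returns a list of levels: level 0 holds the raw chunks, and each
    higher level summarizes groups of nodes from the level below,
    until a single root summary remains.
    """
    levels = [list(chunks)]
    while len(levels[-1]) > 1:
        below = levels[-1]
        above = [summarize(below[i:i + group_size])
                 for i in range(0, len(below), group_size)]
        levels.append(above)
    return levels

# All levels (leaf chunks plus summaries) stay available for retrieval,
# so a query can match context at multiple abstraction levels.
levels = build_raptor_tree(["chunk A", "chunk B", "chunk C", "chunk D"])
```

Because every level is retained, retrieval can return either fine-grained leaf passages or coarse summaries, which is what gives the model access to "document context at multiple abstraction levels" as described in the abstract.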
Anthology ID:
2025.ccl-2.12
Volume:
Proceedings of the 24th China National Conference on Computational Linguistics (CCL 2025)
Month:
August
Year:
2025
Address:
Jinan, China
Editors:
Hongfei Lin, Bin Li, Hongye Tan
Venue:
CCL
Publisher:
Chinese Information Processing Society of China
Pages:
105–109
URL:
https://preview.aclanthology.org/ingest-ccl/2025.ccl-2.12/
Cite (ACL):
Minjun Park and Seulki Lee. 2025. System Report for CCL25-Eval Task 4: From Plain to Hierarchical —Knowledge-Augmented Prompting for Chinese Factivity Inference. In Proceedings of the 24th China National Conference on Computational Linguistics (CCL 2025), pages 105–109, Jinan, China. Chinese Information Processing Society of China.
Cite (Informal):
System Report for CCL25-Eval Task 4: From Plain to Hierarchical —Knowledge-Augmented Prompting for Chinese Factivity Inference (Park & Lee, CCL 2025)
PDF:
https://preview.aclanthology.org/ingest-ccl/2025.ccl-2.12.pdf