PM3-KIE: A Probabilistic Multi-Task Meta-Model for Document Key Information Extraction

Birgit Kirsch, Héctor Allende-Cid, Stefan Rueping


Abstract
Key Information Extraction (KIE) from visually rich documents is commonly approached as either fine-grained token classification or coarse-grained entity extraction. While token-level models capture spatial and visual cues, entity-level models better represent logical dependencies and align with real-world use cases. We introduce PM3-KIE, a probabilistic multi-task meta-model that incorporates both fine-grained and coarse-grained models. It serves as a lightweight reasoning layer that jointly predicts entities and all of their appearances in a document. PM3-KIE incorporates domain-specific schema constraints to enforce logical consistency and integrates large language models for semantic validation, thereby reducing extraction errors. Experiments on two public datasets, DeepForm and FARA, show that PM3-KIE outperforms three state-of-the-art models and a stacked ensemble, achieving a statistically significant 2% improvement in F1 score.
Anthology ID:
2025.findings-acl.1075
Volume:
Findings of the Association for Computational Linguistics: ACL 2025
Month:
July
Year:
2025
Address:
Vienna, Austria
Editors:
Wanxiang Che, Joyce Nabende, Ekaterina Shutova, Mohammad Taher Pilehvar
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
20890–20912
URL:
https://preview.aclanthology.org/display_plenaries/2025.findings-acl.1075/
Cite (ACL):
Birgit Kirsch, Héctor Allende-Cid, and Stefan Rueping. 2025. PM3-KIE: A Probabilistic Multi-Task Meta-Model for Document Key Information Extraction. In Findings of the Association for Computational Linguistics: ACL 2025, pages 20890–20912, Vienna, Austria. Association for Computational Linguistics.
Cite (Informal):
PM3-KIE: A Probabilistic Multi-Task Meta-Model for Document Key Information Extraction (Kirsch et al., Findings 2025)
PDF:
https://preview.aclanthology.org/display_plenaries/2025.findings-acl.1075.pdf