Hierarchical Attention Generates Better Proofs

Jianlong Chen, Chao Li, Yang Yuan, Andrew C Yao


Abstract
Large language models (LLMs) have shown promise in formal theorem proving, but their token-level processing often fails to capture the inherent hierarchical nature of mathematical proofs. We introduce Hierarchical Attention, a regularization method that aligns LLMs’ attention mechanisms with mathematical reasoning structures. Our approach establishes a five-level hierarchy from foundational elements to high-level concepts, ensuring structured information flow in proof generation. Experiments demonstrate that our method improves proof success rates by 2.05% on miniF2F and 1.69% on ProofNet, while reducing proof complexity by 23.81% and 16.50%, respectively. Code and models will be made available.
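The abstract's attention-regularization idea can be sketched as a penalty on attention mass that crosses hierarchy levels in the "wrong" direction. The sketch below is a hypothetical formulation under assumptions of mine, not the authors' actual loss: the function name, the choice to penalize queries attending to strictly higher-level keys, and the integer level labels (0 = foundational, 4 = high-level concept) are all illustrative.

```python
import numpy as np

def hierarchy_attention_penalty(attn, levels):
    """Hypothetical regularizer for hierarchy-aligned attention.

    attn:   (T, T) row-stochastic attention matrix; row i is query
            token i's distribution over key tokens j.
    levels: length-T integer hierarchy label per token
            (0 = foundational element, 4 = high-level concept).

    Assumption: structured information flow means a query should draw
    on tokens at its own level or below, so any attention mass placed
    on a strictly higher-level key is penalized.
    """
    attn = np.asarray(attn)
    levels = np.asarray(levels)
    # upward[i, j] = 1 where key j sits at a higher level than query i
    upward = (levels[None, :] > levels[:, None]).astype(attn.dtype)
    # Average penalized attention mass per query token.
    return float((attn * upward).sum() / attn.shape[0])
```

In a training loop this term would be scaled by a coefficient and added to the usual language-modeling loss; a purely diagonal (self-only) attention pattern incurs zero penalty, while uniform attention over tokens of mixed levels incurs a positive one.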
Anthology ID: 2025.acl-long.856
Volume: Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
Month: July
Year: 2025
Address: Vienna, Austria
Editors: Wanxiang Che, Joyce Nabende, Ekaterina Shutova, Mohammad Taher Pilehvar
Venue: ACL
Publisher: Association for Computational Linguistics
Pages: 17506–17520
URL: https://preview.aclanthology.org/ingestion-acl-25/2025.acl-long.856/
Cite (ACL): Jianlong Chen, Chao Li, Yang Yuan, and Andrew C Yao. 2025. Hierarchical Attention Generates Better Proofs. In Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 17506–17520, Vienna, Austria. Association for Computational Linguistics.
Cite (Informal): Hierarchical Attention Generates Better Proofs (Chen et al., ACL 2025)
PDF: https://preview.aclanthology.org/ingestion-acl-25/2025.acl-long.856.pdf