CoMAT: Chain of Mathematically Annotated Thought Improves Mathematical Reasoning

Joshua Ong Jun Leang, Aryo Pradipta Gema, Shay B Cohen


Abstract
Mathematical reasoning remains a significant challenge for large language models (LLMs), despite progress in prompting techniques such as Chain-of-Thought (CoT). We present Chain of Mathematically Annotated Thought (CoMAT), which enhances reasoning through two stages: Symbolic Conversion (converting natural language queries into symbolic form) and Reasoning Execution (deriving answers from symbolic representations). CoMAT operates entirely with a single LLM and without external solvers. Across four LLMs, CoMAT outperforms traditional CoT on six out of seven benchmarks, achieving gains of 4.48% on MMLU-Redux (MATH) and 4.58% on GaoKao MCQ. In addition to improved performance, CoMAT ensures faithfulness and verifiability, offering a transparent reasoning process for complex mathematical tasks.
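As a rough illustration of the two-stage process described in the abstract, the sketch below chains one LLM call per stage using a single model and no external solver. The prompt wording, the `call_llm` helper, and the `comat_answer` function are hypothetical stand-ins for illustration only, not the prompts or code released with the paper.

```python
# Minimal sketch of a CoMAT-style two-stage pipeline (hypothetical names throughout).
# `call_llm` stands in for whatever single-LLM API the reader uses.

SYMBOLIC_CONVERSION_PROMPT = (
    "Convert the following math question into formal symbolic notation. "
    "Define every variable and state the quantity to solve for.\n\nQuestion: {question}"
)

REASONING_EXECUTION_PROMPT = (
    "Using only the symbolic representation below, derive the answer step by step "
    "and finish with 'Answer: <value>'.\n\nSymbolic form:\n{symbolic_form}"
)


def comat_answer(question: str, call_llm) -> str:
    """Run both stages with the same LLM and no external solver."""
    # Stage 1: Symbolic Conversion — rewrite the natural-language query in symbolic form.
    symbolic_form = call_llm(SYMBOLIC_CONVERSION_PROMPT.format(question=question))
    # Stage 2: Reasoning Execution — derive the answer from the symbolic form alone.
    return call_llm(REASONING_EXECUTION_PROMPT.format(symbolic_form=symbolic_form))


if __name__ == "__main__":
    # Stand-in "LLM" so the sketch runs without any API; swap in a real model call.
    echo_llm = lambda prompt: f"[model output for: {prompt[:60]}...]"
    print(comat_answer("If 3x + 5 = 20, what is x?", echo_llm))
```

Keeping both stages as separate prompts to the same model is what makes the intermediate symbolic representation inspectable, which is the property the abstract refers to as faithfulness and verifiability.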
Anthology ID:
2025.emnlp-main.1024
Volume:
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing
Month:
November
Year:
2025
Address:
Suzhou, China
Editors:
Christos Christodoulopoulos, Tanmoy Chakraborty, Carolyn Rose, Violet Peng
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
20256–20285
URL:
https://preview.aclanthology.org/ingest-emnlp/2025.emnlp-main.1024/
Cite (ACL):
Joshua Ong Jun Leang, Aryo Pradipta Gema, and Shay B Cohen. 2025. CoMAT: Chain of Mathematically Annotated Thought Improves Mathematical Reasoning. In Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing, pages 20256–20285, Suzhou, China. Association for Computational Linguistics.
Cite (Informal):
CoMAT: Chain of Mathematically Annotated Thought Improves Mathematical Reasoning (Leang et al., EMNLP 2025)
PDF:
https://preview.aclanthology.org/ingest-emnlp/2025.emnlp-main.1024.pdf
Checklist:
2025.emnlp-main.1024.checklist.pdf