Domaino1s: Guiding LLM Reasoning for Explainable Answers in High-Stakes Domains

Xu Chu, Zhijie Tan, Hanlin Xue, Guanyu Wang, Tong Mo, Weiping Li


Abstract
Large Language Models (LLMs) are widely applied to downstream domains. However, current LLMs for high-stakes domain tasks, such as financial investment and legal question answering, typically generate brief answers without reasoning processes or explanations, which limits users' confidence in making decisions based on their responses. While Chain-of-Thought (CoT) prompting shows promise, it lacks self-correction mechanisms during reasoning. This work introduces Domaino1s, which enhances LLMs' reasoning capabilities on domain tasks through supervised fine-tuning and tree search. We construct the CoT-stock-2k and CoT-legal-2k datasets to fine-tune models that activate domain-specific reasoning steps based on their own judgment. Additionally, we propose Selective Tree Exploration, which spontaneously explores solution spaces and samples optimal reasoning paths to improve performance. We also introduce PROOF-Score, a new metric for evaluating the explainability of domain models, complementing traditional accuracy metrics with richer assessment dimensions. Extensive experiments on stock investment recommendation and legal reasoning QA tasks demonstrate Domaino1s's leading performance and explainability. Our code is available at https://anonymous.4open.science/r/Domaino1s-006F/.
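To make the tree-search idea in the abstract concrete, below is a minimal, hypothetical sketch of selecting a reasoning path by expanding candidate next steps and keeping the highest-scoring partial path. All function names (`selective_tree_explore`, `expand`, `score`) are illustrative assumptions; the paper's actual Selective Tree Exploration may use a different expansion and scoring strategy.

```python
# Hypothetical sketch: greedy tree exploration over reasoning steps.
# At each depth, expand the current path into candidate next steps
# and keep only the best-scoring extended path.
from typing import Callable, List


def selective_tree_explore(
    expand: Callable[[List[str]], List[str]],  # proposes candidate next steps
    score: Callable[[List[str]], float],       # judges a partial reasoning path
    max_depth: int,
) -> List[str]:
    path: List[str] = []
    for _ in range(max_depth):
        candidates = expand(path)
        if not candidates:  # no further steps to explore
            break
        # Keep the single best extension of the current path.
        path = max((path + [c] for c in candidates), key=score)
    return path


# Toy usage: steps are digit strings; the score prefers larger sums.
steps = {0: ["1", "3"], 1: ["2", "5"], 2: []}
path = selective_tree_explore(
    expand=lambda p: steps[len(p)],
    score=lambda p: sum(int(s) for s in p),
    max_depth=3,
)
print(path)  # -> ['3', '5']
```

In a real system, `expand` would sample candidate reasoning steps from the LLM and `score` would come from a learned or heuristic path evaluator; the greedy `max` here is the simplest possible selection rule.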
Anthology ID:
2025.findings-acl.171
Volume:
Findings of the Association for Computational Linguistics: ACL 2025
Month:
July
Year:
2025
Address:
Vienna, Austria
Editors:
Wanxiang Che, Joyce Nabende, Ekaterina Shutova, Mohammad Taher Pilehvar
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
3275–3293
URL:
https://preview.aclanthology.org/mtsummit-25-ingestion/2025.findings-acl.171/
DOI:
10.18653/v1/2025.findings-acl.171
Cite (ACL):
Xu Chu, Zhijie Tan, Hanlin Xue, Guanyu Wang, Tong Mo, and Weiping Li. 2025. Domaino1s: Guiding LLM Reasoning for Explainable Answers in High-Stakes Domains. In Findings of the Association for Computational Linguistics: ACL 2025, pages 3275–3293, Vienna, Austria. Association for Computational Linguistics.
Cite (Informal):
Domaino1s: Guiding LLM Reasoning for Explainable Answers in High-Stakes Domains (Chu et al., Findings 2025)
PDF:
https://preview.aclanthology.org/mtsummit-25-ingestion/2025.findings-acl.171.pdf