DiPT: Enhancing LLM Reasoning through Diversified Perspective-Taking

Hoang Anh Just, Mahavir Dabas, Lifu Huang, Ming Jin, Ruoxi Jia


Abstract
Existing work on improving language model reasoning typically explores a single solution path, which can be prone to errors. Inspired by perspective-taking in social studies, this paper introduces DiPT, a novel approach that complements current reasoning methods by explicitly incorporating diversified viewpoints. This approach allows the model to gain a deeper understanding of the problem’s context and identify the most effective solution path during the inference stage. Additionally, it provides a general data-centric AI recipe for augmenting existing data to improve their quality for fine-tuning. Our empirical results demonstrate that DiPT can be flexibly integrated into existing methods that focus on a single reasoning approach, enhancing their reasoning performance and stability when presented with paraphrased problems. Furthermore, we illustrate improved context understanding by maintaining the model’s safe outputs against “jailbreaking” prompts intentionally designed to bypass safeguards built into deployed models. Lastly, we show that fine-tuning with data enriched with diverse perspectives can boost the reasoning capabilities of the model compared to fine-tuning with raw data alone.
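The abstract describes two uses of DiPT: eliciting multiple perspectives at inference time, and enriching fine-tuning data with diverse viewpoints. Below is a minimal illustrative sketch of the inference-time idea in Python. The prompt wording, the number of perspectives, and the `call_llm` helper are assumptions for illustration only, not the paper's actual templates or code.

```python
# Minimal sketch of a DiPT-style inference procedure, based only on the
# abstract above. Prompt phrasing and the two-step structure are assumed.

def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for any chat-completion API client."""
    raise NotImplementedError

def dipt_answer(problem: str, n_perspectives: int = 3) -> str:
    # Step 1 (assumed): elicit several distinct viewpoints on the problem
    # before committing to a single reasoning path.
    perspectives = call_llm(
        f"Consider the following problem from {n_perspectives} different "
        "perspectives (e.g., different framings or solution strategies), "
        f"and describe each briefly.\n\nProblem: {problem}"
    )
    # Step 2 (assumed): reason over the collected perspectives, pick the
    # most promising solution path, then solve step by step.
    return call_llm(
        "Given these perspectives on the problem, choose the most "
        "effective solution path and solve the problem step by step.\n\n"
        f"Problem: {problem}\n\nPerspectives:\n{perspectives}"
    )
```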
Anthology ID: 2025.findings-naacl.356
Volume: Findings of the Association for Computational Linguistics: NAACL 2025
Month: April
Year: 2025
Address: Albuquerque, New Mexico
Editors: Luis Chiruzzo, Alan Ritter, Lu Wang
Venue: Findings
Publisher: Association for Computational Linguistics
Pages: 6344–6374
URL: https://preview.aclanthology.org/fix-sig-urls/2025.findings-naacl.356/
Cite (ACL): Hoang Anh Just, Mahavir Dabas, Lifu Huang, Ming Jin, and Ruoxi Jia. 2025. DiPT: Enhancing LLM Reasoning through Diversified Perspective-Taking. In Findings of the Association for Computational Linguistics: NAACL 2025, pages 6344–6374, Albuquerque, New Mexico. Association for Computational Linguistics.
Cite (Informal): DiPT: Enhancing LLM Reasoning through Diversified Perspective-Taking (Just et al., Findings 2025)
PDF: https://preview.aclanthology.org/fix-sig-urls/2025.findings-naacl.356.pdf