Trustworthiness and Self-awareness in Large Language Models: An Exploration through the Think-Solve-Verify Framework

Zhendong Liu, Changhong Xia, Wei He, Chongjun Wang


Abstract
As Large Language Models (LLMs) become increasingly influential in reasoning tasks, ensuring their trustworthiness and introspective self-awareness is critical. This research introduces the Think-Solve-Verify (TSV) framework, a strategy designed to probe LLMs' trustworthiness, introspective self-awareness, and collaborative reasoning. The method emphasizes a model's ability to construct an introspective reasoning process from its answer and to verify that the answer is trustworthy. Reasoning with TSV performs at or near the top on the majority of datasets while requiring only a single interaction with the LLM. Moreover, we refine the voting process of self-consistency within the Chain-of-Thought (CoT) approach, yielding notable accuracy gains; on the AQuA dataset, this improves performance from 67.3% to 72.8%. Finally, we examine the model's ability to explain given answers, highlighting the importance of distinguishing genuine comprehension from mere guesswork.
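The abstract does not spell out how the refined voting works; for orientation only, the standard self-consistency baseline it builds on, extended with a verification filter in the spirit of the "Verify" step, can be sketched as below. The helper names (sample_cot_answer, verify_answer) and the fallback behavior are assumptions for illustration, not the authors' implementation.

```python
from collections import Counter

def sample_cot_answer(question: str) -> str:
    """Hypothetical helper: sample one Chain-of-Thought completion from an LLM
    (temperature > 0) and return the extracted final answer."""
    raise NotImplementedError

def verify_answer(question: str, answer: str) -> bool:
    """Hypothetical verification step: ask the model to reconstruct the reasoning
    behind a candidate answer and report whether it holds up."""
    raise NotImplementedError

def self_consistency_vote(question: str, n_samples: int = 10) -> str:
    """Baseline self-consistency: sample several CoT answers and take a majority
    vote. A TSV-style refinement could restrict the vote to verified answers."""
    answers = [sample_cot_answer(question) for _ in range(n_samples)]
    verified = [a for a in answers if verify_answer(question, a)]
    pool = verified if verified else answers  # fall back if nothing verifies
    return Counter(pool).most_common(1)[0][0]
```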
Anthology ID:
2024.lrec-main.1465
Volume:
Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)
Month:
May
Year:
2024
Address:
Torino, Italia
Editors:
Nicoletta Calzolari, Min-Yen Kan, Veronique Hoste, Alessandro Lenci, Sakriani Sakti, Nianwen Xue
Venues:
LREC | COLING
Publisher:
ELRA and ICCL
Pages:
16855–16866
URL:
https://aclanthology.org/2024.lrec-main.1465
Cite (ACL):
Zhendong Liu, Changhong Xia, Wei He, and Chongjun Wang. 2024. Trustworthiness and Self-awareness in Large Language Models: An Exploration through the Think-Solve-Verify Framework. In Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024), pages 16855–16866, Torino, Italia. ELRA and ICCL.
Cite (Informal):
Trustworthiness and Self-awareness in Large Language Models: An Exploration through the Think-Solve-Verify Framework (Liu et al., LREC-COLING 2024)
PDF:
https://preview.aclanthology.org/nschneid-patch-4/2024.lrec-main.1465.pdf
Optional supplementary material:
 2024.lrec-main.1465.OptionalSupplementaryMaterial.zip