Tan Wang
2025
Relevant or Random: Can LLMs Truly Perform Analogical Reasoning?
Chengwei Qin | Wenhan Xia | Tan Wang | Fangkai Jiao | Yuchen Hu | Bosheng Ding | Ruirui Chen | Shafiq Joty
Findings of the Association for Computational Linguistics: ACL 2025
Analogical reasoning is a unique ability of humans to address unfamiliar challenges by transferring strategies from relevant past experiences. One key finding in psychology is that, compared with irrelevant past experiences, recalling relevant ones helps humans better handle new tasks. Coincidentally, the NLP community has also recently found that self-generating relevant examples in the context can help large language models (LLMs) solve a given problem better than hand-crafted prompts. However, it is not yet clear whether relevance is the key factor eliciting such capability, i.e., can LLMs benefit more from self-generated relevant examples than from irrelevant ones? In this work, we systematically explore whether LLMs can truly perform analogical reasoning on a diverse set of reasoning tasks. With extensive experiments and analysis, we show that self-generated random examples can surprisingly achieve comparable or even better performance on certain tasks, e.g., a 4% performance boost on GSM8K with random biological examples. We find that the accuracy of self-generated examples is the key factor and subsequently design two novel methods with improved performance and significantly reduced inference costs. Overall, we aim to advance a deeper understanding of LLM analogical reasoning and hope this work stimulates further research in the design of self-generated contexts.
2024
Explaining Language Model Predictions with High-Impact Concepts
Ruochen Zhao | Tan Wang | Yongjie Wang | Shafiq Joty
Findings of the Association for Computational Linguistics: EACL 2024
To encourage fairness and transparency, there is an urgent demand for reliable explanations of large language models (LLMs). One promising solution is concept-based explanations, i.e., human-understandable concepts derived from internal representations. However, due to the compositional nature of language, current methods mostly discover correlational explanations instead of causal features. Therefore, we propose a novel framework that provides impact-aware explanations, which are robust to feature changes and influential on the model’s predictions, to help users understand the LLM’s behavior. Specifically, we extract predictive high-level features (concepts) from the model’s hidden layer activations. Then, we innovatively optimize for features whose existence causes the output predictions to change substantially. Extensive experiments on real and synthetic tasks demonstrate that our method achieves superior results on predictive impact, explainability, and faithfulness compared to the baselines, especially for LLMs.