Leveraging Training Data in Few-Shot Prompting for Numerical Reasoning

Zhanming Jie, Wei Lu


Abstract
Chain-of-thought (CoT) prompting with large language models has proven effective in numerous natural language processing tasks, but designing prompts that generalize well to diverse problem types can be challenging CITATION, especially in the context of math word problem (MWP) solving. Additionally, it is common to have a large amount of training data with better diversity coverage but without CoT annotations, which limits the use of supervised learning techniques. To address these issues, we investigate two approaches for leveraging the training data in a few-shot prompting scenario: dynamic program prompting and program distillation. Our approach is largely inspired by CITATION, which proposed replacing the CoT with programs as the intermediate reasoning steps. Such a prompting strategy allows us to accurately verify answer correctness through program execution in MWP solving. Our dynamic program prompting involves annotating the training data by sampling correct programs from a large language model, while program distillation involves adapting a smaller model to the program-annotated training data. Our experiments on three standard MWP datasets demonstrate the effectiveness of these approaches, yielding significant improvements over previous baselines for prompting and fine-tuning. Our results suggest that leveraging a large amount of training data can improve the generalization ability of prompts and boost the performance of fine-tuned smaller models in MWP solving.
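The verification step described in the abstract, checking a sampled program against the gold answer by executing it, can be sketched as follows. This is a minimal illustration, not the authors' implementation; the `is_correct` helper, the convention that a program sets a variable named `answer`, and the sample data are all assumptions made for the example.

```python
def is_correct(program: str, gold_answer: float, tol: float = 1e-4) -> bool:
    """Execute an LLM-sampled program and check it reproduces the gold answer."""
    try:
        env = {}
        exec(program, {}, env)  # the program is expected to assign `answer`
        return abs(env["answer"] - gold_answer) < tol
    except Exception:
        return False            # malformed or crashing programs are rejected

# Keep only training examples whose sampled program executes to the gold answer;
# the surviving pairs can then serve as program annotations for prompting or
# for fine-tuning a smaller model.
samples = [
    ("answer = 3 * 4 + 2", 14.0),
    ("answer = 10 / 0", 1.0),   # raises ZeroDivisionError, so it is filtered out
]
verified = [(p, a) for p, a in samples if is_correct(p, a)]
```

Unlike free-form CoT rationales, a program either executes to the right number or it does not, which is what makes this automatic filtering of sampled annotations possible.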
Anthology ID:
2023.findings-acl.668
Volume:
Findings of the Association for Computational Linguistics: ACL 2023
Month:
July
Year:
2023
Address:
Toronto, Canada
Editors:
Anna Rogers, Jordan Boyd-Graber, Naoaki Okazaki
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
10518–10526
URL:
https://aclanthology.org/2023.findings-acl.668
DOI:
10.18653/v1/2023.findings-acl.668
Bibkey:
Cite (ACL):
Zhanming Jie and Wei Lu. 2023. Leveraging Training Data in Few-Shot Prompting for Numerical Reasoning. In Findings of the Association for Computational Linguistics: ACL 2023, pages 10518–10526, Toronto, Canada. Association for Computational Linguistics.
Cite (Informal):
Leveraging Training Data in Few-Shot Prompting for Numerical Reasoning (Jie & Lu, Findings 2023)
PDF:
https://preview.aclanthology.org/proper-vol2-ingestion/2023.findings-acl.668.pdf