Abstract
Extensive research exists on the performance of large language models on logic-based tasks, but relatively little on their ability to generate creative solutions to lateral thinking tasks. The BrainTeaser shared task tests lateral thinking and uses adversarial datasets to prevent memorization, resulting in poor performance for out-of-the-box models. We propose a system for iterative, chain-of-thought prompt engineering that optimizes prompts using human evaluation. Using this shared task, we demonstrate our system's ability to significantly improve model performance through prompt optimization, and we evaluate the input dataset.

- Anthology ID: 2024.semeval-1.263
- Volume: Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024)
- Month: June
- Year: 2024
- Address: Mexico City, Mexico
- Editors: Atul Kr. Ojha, A. Seza Doğruöz, Harish Tayyar Madabushi, Giovanni Da San Martino, Sara Rosenthal, Aiala Rosá
- Venue: SemEval
- SIG: SIGLEX
- Publisher: Association for Computational Linguistics
- Pages: 1876–1888
- URL: https://aclanthology.org/2024.semeval-1.263
- DOI: 10.18653/v1/2024.semeval-1.263
- Cite (ACL): Alvin Po-Chun Chen, Ray Groshan, and Sean Von Bayern. 2024. Mothman at SemEval-2024 Task 9: An Iterative System for Chain-of-Thought Prompt Optimization. In Proceedings of the 18th International Workshop on Semantic Evaluation (SemEval-2024), pages 1876–1888, Mexico City, Mexico. Association for Computational Linguistics.
- Cite (Informal): Mothman at SemEval-2024 Task 9: An Iterative System for Chain-of-Thought Prompt Optimization (Chen et al., SemEval 2024)
- PDF: https://preview.aclanthology.org/nschneid-patch-4/2024.semeval-1.263.pdf
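The abstract describes an iterative cycle in which candidate prompts are run against the task, scored by human evaluators, and revised. The sketch below illustrates the general shape of such a loop; it is not the authors' implementation, and every name in it (`run_model`, `human_score`, `refine`) is a hypothetical stub standing in for an LLM call, a human rating, and a feedback-driven prompt revision, respectively.

```python
from dataclasses import dataclass

# Illustrative sketch of an iterative, human-in-the-loop prompt-optimization
# cycle. All functions are toy stubs, not the paper's actual system.

@dataclass
class Candidate:
    prompt: str
    score: float = 0.0

def run_model(prompt: str, puzzle: str) -> str:
    """Stub for an LLM call: returns a dummy answer string."""
    return f"answer({len(prompt)}:{puzzle})"

def human_score(answer: str) -> float:
    """Stub for human evaluation: a toy heuristic stands in for a
    person rating the chain-of-thought output."""
    return float(len(answer) % 7)

def refine(prompt: str, round_no: int) -> str:
    """Stub for prompt revision informed by evaluator feedback."""
    return prompt + f" Think step by step (revision {round_no})."

def optimize(seed_prompt: str, puzzles: list[str], rounds: int = 3) -> Candidate:
    """Score each candidate prompt over the puzzle set, keep the best,
    and revise it for the next round."""
    best = Candidate(seed_prompt, float("-inf"))
    prompt = seed_prompt
    for r in range(rounds):
        total = sum(human_score(run_model(prompt, p)) for p in puzzles)
        cand = Candidate(prompt, total / len(puzzles))
        if cand.score > best.score:
            best = cand
        prompt = refine(best.prompt, r + 1)  # iterate from the best so far
    return best

best = optimize("Solve the riddle:", ["p1", "p2"])
```

The key design point the loop illustrates is that evaluation feedback flows back into prompt revision each round, rather than prompts being fixed up front.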