Wenyang Hu
2025
Dipper: Diversity in Prompts for Producing Large Language Model Ensembles in Reasoning Tasks
Wenyang Hu | Gregory Kang Ruey Lau | Liu Diwen | Chen Jizhuo | See-Kiong Ng | Bryan Kian Hsiang Low
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing
Large Language Models (LLMs), particularly smaller variants, still struggle with complex reasoning tasks. While inference-time prompting can guide reasoning, existing methods often rely on sequential queries. Ensemble approaches offer a promising path to performance gains, especially given recent batch inference speed-ups. This work introduces DIPPER, a novel, training-free framework that transforms a single LLM into an effective inference-time ensemble. By feeding the model an optimized and diverse set of prompts in parallel, DIPPER elicits varied reasoning paths whose aggregated answers improve accuracy. We empirically demonstrate significant improvements on mathematical reasoning benchmarks, such as MATH, where a DIPPER ensemble of three Qwen2-MATH-1.5B instances (via parallel prompting of a single model) outperforms a larger Qwen2-MATH-7B model.
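The abstract describes the core recipe: send several diverse prompts for the same question to one model in parallel, then aggregate the answers. A minimal sketch of that idea is below; the prompt templates, the `query_model` stub, and the majority-vote aggregation are illustrative assumptions, not the paper's actual implementation.

```python
from collections import Counter
from concurrent.futures import ThreadPoolExecutor

# Hypothetical diverse prompt templates (assumed for illustration;
# DIPPER optimizes its prompt set, which is not reproduced here).
DIVERSE_PROMPTS = [
    "Solve step by step: {question}",
    "Explain like a math teacher, then solve: {question}",
    "Break the problem into sub-problems and solve each: {question}",
]

def query_model(prompt: str) -> str:
    """Stub standing in for a single small LLM.

    Returns a fixed answer so the sketch is runnable; a real system
    would batch all prompts in one parallel inference call.
    """
    return "42"

def dipper_ensemble(question: str, prompts=DIVERSE_PROMPTS) -> str:
    """Query the same model once per diverse prompt, majority-vote the answers."""
    filled = [p.format(question=question) for p in prompts]
    with ThreadPoolExecutor() as pool:
        answers = list(pool.map(query_model, filled))
    # Aggregate: most common final answer wins.
    return Counter(answers).most_common(1)[0][0]

print(dipper_ensemble("What is 6 * 7?"))  # prints "42" with the stub model
```

Because the same weights serve every ensemble member, the only cost over a single query is the extra parallel batch, which is what makes the approach attractive given batch-inference speed-ups.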
2024
Position Paper: Data-Centric AI in the Age of Large Language Models
Xinyi Xu | Zhaoxuan Wu | Rui Qiao | Arun Verma | Yao Shu | Jingtan Wang | Xinyuan Niu | Zhenfeng He | Jiangwei Chen | Zijian Zhou | Gregory Kang Ruey Lau | Hieu Dao | Lucas Agussurja | Rachael Hwee Ling Sim | Xiaoqiang Lin | Wenyang Hu | Zhongxiang Dai | Pang Wei Koh | Bryan Kian Hsiang Low
Findings of the Association for Computational Linguistics: EMNLP 2024
This position paper proposes a data-centric viewpoint of AI research, focusing on large language models (LLMs). We start by making a key observation that data is instrumental in the developmental (e.g., pretraining and fine-tuning) and inferential stages (e.g., in-context learning) of LLMs, and advocate that data-centric research should receive more attention from the community. We identify four specific scenarios centered around data, covering data-centric benchmarks and data curation, data attribution, knowledge transfer, and inference contextualization. In each scenario, we underscore the importance of data, highlight promising research directions, and articulate the potential impacts on the research community and, where applicable, society as a whole. For instance, we advocate for a suite of data-centric benchmarks tailored to the scale and complexity of data for LLMs. These benchmarks can be used to develop new data curation methods and document research efforts and results, which can help promote openness and transparency in AI and LLM research.