Zijian Zhou
2025
TETRIS: Optimal Draft Token Selection for Batch Speculative Decoding
Zhaoxuan Wu | Zijian Zhou | Arun Verma | Alok Prakash | Daniela Rus | Bryan Kian Hsiang Low
Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
We propose TETRIS, a novel method that optimizes the total throughput of batch speculative decoding in multi-request settings. Unlike existing methods that optimize for a single request or a group of requests as a whole, TETRIS actively selects the most promising draft tokens (for every request in a batch) to be accepted when verified in parallel, resulting in fewer rejected tokens and hence less wasted computing resources. Such effective resource utilization for fast inference in large language models (LLMs) is especially important to service providers with limited inference capacity. Compared to baseline speculative decoding, TETRIS yields a consistently higher acceptance rate and more effective utilization of the limited inference capacity. We show theoretically and empirically that TETRIS outperforms baseline speculative decoding and existing methods that dynamically select draft tokens, leading to more efficient batch inference in LLMs.
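The selection problem the abstract describes can be made concrete with a small sketch. The following is a minimal illustration, not the authors' published algorithm: it assumes each draft token comes with an estimated acceptance probability, and it greedily extends per-request draft prefixes by expected marginal gain until a shared verification budget is exhausted. The function name `select_draft_tokens` and the greedy rule are illustrative assumptions.

```python
import heapq

def select_draft_tokens(batch_drafts, budget):
    """Greedily pick draft tokens across a batch for parallel verification.

    batch_drafts: per-request lists of estimated acceptance probabilities,
        one per draft token, in draft order (an assumed input; the paper's
        exact scoring signal may differ).
    budget: total number of draft tokens the target model can verify in
        one batched forward pass.

    A draft token only contributes if every earlier draft token of the
    same request is also selected, so we extend each request's prefix one
    token at a time, always taking the extension with the highest expected
    marginal gain (the probability that the whole prefix up to and
    including that token is accepted).
    """
    # Max-heap keyed by the expected gain of extending each request by one token.
    heap = []
    for req, probs in enumerate(batch_drafts):
        if probs:
            # Gain of taking the first draft token of this request.
            heapq.heappush(heap, (-probs[0], req, 0, probs[0]))

    selected = [0] * len(batch_drafts)  # chosen prefix length per request
    while heap and budget > 0:
        neg_gain, req, idx, prefix_prob = heapq.heappop(heap)
        selected[req] = idx + 1
        budget -= 1
        probs = batch_drafts[req]
        if idx + 1 < len(probs):
            # The next token is accepted only if the entire prefix before
            # it is accepted, so its gain is the running product.
            nxt = prefix_prob * probs[idx + 1]
            heapq.heappush(heap, (-nxt, req, idx + 1, nxt))
    return selected
```

For example, with `batch_drafts = [[0.9, 0.8], [0.5]]` and `budget = 2`, the sketch spends both verification slots on the first request (expected gains 0.9 and 0.72 both beat 0.5), returning `[2, 0]`. Because acceptance probabilities are at most 1, marginal gains are non-increasing within each request, which is what makes the heap-based greedy extension well defined.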
2024
Position Paper: Data-Centric AI in the Age of Large Language Models
Xinyi Xu | Zhaoxuan Wu | Rui Qiao | Arun Verma | Yao Shu | Jingtan Wang | Xinyuan Niu | Zhenfeng He | Jiangwei Chen | Zijian Zhou | Gregory Kang Ruey Lau | Hieu Dao | Lucas Agussurja | Rachael Hwee Ling Sim | Xiaoqiang Lin | Wenyang Hu | Zhongxiang Dai | Pang Wei Koh | Bryan Kian Hsiang Low
Findings of the Association for Computational Linguistics: EMNLP 2024
This position paper proposes a data-centric viewpoint of AI research, focusing on large language models (LLMs). We start by making a key observation that data is instrumental in the developmental (e.g., pretraining and fine-tuning) and inferential stages (e.g., in-context learning) of LLMs, and advocate that data-centric research should receive more attention from the community. We identify four specific scenarios centered around data, covering data-centric benchmarks and data curation, data attribution, knowledge transfer, and inference contextualization. In each scenario, we underscore the importance of data, highlight promising research directions, and articulate the potential impacts on the research community and, where applicable, society as a whole. For instance, we advocate for a suite of data-centric benchmarks tailored to the scale and complexity of data for LLMs. These benchmarks can be used to develop new data curation methods and document research efforts and results, which can help promote openness and transparency in AI and LLM research.