Jianwei Zhang


2025

FPE2M2: Approaching Lossless and Efficient Quantization with Native Floating Point
Ke Yi | Jianwei Zhang | Zhiying Xu | Xinlong Yang | Yang Zhou | Minmin Sun | Zengke Liu | Tong Zhang | Junyang Lin | Jingren Zhou
Findings of the Association for Computational Linguistics: ACL 2025

Auto-regressive decoding is a memory-bound task, meaning decoding performance is limited by memory bandwidth rather than the computational capabilities of the GPU. Weight-only quantization is a promising method to address this memory-bound limitation. Previous studies have followed one of two approaches: some have studied integer quantization exclusively, ignoring the Gaussian nature of LLM weight distributions, while others have proposed non-uniform quantization that incurs additional I/O overhead due to lookup tables, e.g., NF4. In this work, we extend the IEEE 754 floating-point standard to the ExMy quantization schema, which allocates x bits for the exponent and y bits for the mantissa to represent a number. In terms of runtime efficiency, we demonstrate that the conversion from ExMy to FP16 can be realized through register-level operations, achieving almost the same performance as INT5. In terms of quantization loss, we analyze different ExMy settings and find that the E2M2 schema achieves an optimal balance, offering the highest efficiency with lossless accuracy. We further propose the FPE2M2 framework, which supports lossless weight-only quantization inference, and validate it on Qwen and LLaMA models across text, image, and audio tasks, achieving faster inference speed while maintaining nearly lossless accuracy.
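A minimal sketch of how a 5-bit ExMy value (here E2M2: 1 sign bit, x = 2 exponent bits, y = 2 mantissa bits) can be decoded to a float following IEEE 754 conventions, as the abstract describes. The bit layout, the bias, and the handling of the all-zero exponent (subnormal values) are assumptions for illustration, not the paper's exact encoding or its register-level FP16 conversion.

```python
def decode_e2m2(code: int) -> float:
    """Decode a 5-bit E2M2 code (0..31) into a Python float."""
    sign = -1.0 if (code >> 4) & 0x1 else 1.0
    exponent = (code >> 2) & 0x3          # 2 exponent bits
    mantissa = code & 0x3                 # 2 mantissa bits
    bias = (1 << (2 - 1)) - 1             # IEEE-style bias = 1 for a 2-bit exponent

    if exponent == 0:
        # Subnormal range: no implicit leading 1
        value = (mantissa / 4.0) * 2.0 ** (1 - bias)
    else:
        # Normal range: implicit leading 1
        value = (1.0 + mantissa / 4.0) * 2.0 ** (exponent - bias)
    return sign * value


if __name__ == "__main__":
    # Enumerate the full 5-bit codebook; E2M2 represents 32 values,
    # which is why its runtime cost is comparable to INT5.
    for c in range(32):
        print(f"{c:05b} -> {decode_e2m2(c):+.3f}")
```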

2024

Emotion Aggregation in Artistic Image Analysis: Effects of Label Distribution Learning
Ryuichi Takahashi | Yuta Sasaki | Yuhki Shiraishi | Jianwei Zhang
Proceedings of the 38th Pacific Asia Conference on Language, Information and Computation

2023

JarviX: A LLM No code Platform for Tabular Data Analysis and Optimization
Shang-Ching Liu | ShengKun Wang | Tsungyao Chang | Wenqi Lin | Chung-Wei Hsiung | Yi-Chen Hsieh | Yu-Ping Cheng | Sian-Hong Luo | Jianwei Zhang
Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing: Industry Track

In this study, we introduce JarviX, a sophisticated data analytics framework. JarviX is designed to employ Large Language Models (LLMs) to provide automated guidance and execute high-precision data analyses on tabular datasets. This framework emphasizes the significance of varying column types, capitalizing on state-of-the-art LLMs to generate concise data insight summaries, propose relevant analysis inquiries, visualize data effectively, and provide comprehensive explanations for results drawn from an extensive data analysis pipeline. Moreover, JarviX incorporates an automated machine learning (AutoML) pipeline for predictive modeling. This integration forms a comprehensive and automated optimization cycle, which proves particularly advantageous for optimizing machine configurations. The efficacy and adaptability of JarviX are substantiated through a series of practical use case studies.
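A minimal, hypothetical sketch of the kind of pipeline the abstract describes: profile a tabular dataset, ask an LLM for an insight summary and follow-up analysis questions, and fit a simple predictive model in place of the AutoML step. The llm_complete function and the prompt wording are placeholders, not JarviX's actual API or prompts.

```python
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split


def llm_complete(prompt: str) -> str:
    """Placeholder for a call to a large language model provider."""
    return f"[LLM response to: {prompt[:40]}...]"


def analyze(df: pd.DataFrame, target: str) -> dict:
    # 1. Profile columns so the LLM sees the schema and basic statistics.
    profile = df.describe(include="all").to_string()

    # 2. Ask the LLM for a concise insight summary and candidate analysis questions.
    summary = llm_complete(f"Summarize this tabular dataset:\n{profile}")
    questions = llm_complete(f"Propose three analysis questions for:\n{profile}")

    # 3. Simple predictive-modeling step standing in for the AutoML pipeline.
    X = df.drop(columns=[target]).select_dtypes("number")
    y = df[target]
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
    model = RandomForestRegressor(random_state=0).fit(X_train, y_train)

    return {"summary": summary,
            "questions": questions,
            "r2": model.score(X_test, y_test)}
```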