Jizhi Zhang


2024

GeoGPT4V: Towards Geometric Multi-modal Large Language Models with Geometric Image Generation
Shihao Cai | Keqin Bao | Hangyu Guo | Jizhi Zhang | Jun Song | Bo Zheng
Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing

Large language models have seen widespread adoption in math problem-solving, yet for geometry problems, which often necessitate visual aids even for humans, the most advanced multi-modal models still struggle to effectively utilize image information. High-quality data is crucial for enhancing the geometric capabilities of multi-modal models, yet existing open-source datasets and related efforts are either too challenging for direct model learning or suffer from misalignment between text and images. To overcome this issue, we introduce a novel pipeline that leverages GPT-4 and GPT-4V to generate relatively basic geometry problems with aligned text and images, facilitating model learning. We have produced a dataset of 4.9K geometry problems and combined it with 19K open-source data to form our GeoGPT4V dataset. Experimental results demonstrate that the GeoGPT4V dataset significantly improves the geometry performance of various models on the MathVista and MathVision benchmarks. The code is available at https://anonymous.4open.science/r/GeoGPT4V-08B2.
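
The abstract describes a pipeline in which GPT-4 and GPT-4V turn harder open-source geometry problems into simpler problems with text-aligned images. The sketch below is only an illustration of that flow under stated assumptions: the helper callables (simplify_problem, describe_image, render_figure, score_alignment) and the alignment threshold are hypothetical placeholders, not the released pipeline.

# Illustrative sketch of a GeoGPT4V-style data generation loop.
# The helper callables passed in stand for GPT-4 / GPT-4V calls and a figure
# renderer; they are assumptions for illustration, not the paper's code.
def generate_simpler_samples(seed_problems, simplify_problem, describe_image,
                             render_figure, score_alignment, n_variants=3):
    dataset = []
    for problem in seed_problems:
        for _ in range(n_variants):
            # 1) Ask a text-only model for an easier variant of the problem.
            simpler_text = simplify_problem(problem)
            # 2) Ask a multi-modal model for a figure specification matching
            #    the simplified statement, then render it to an image.
            figure_spec = describe_image(simpler_text)
            image = render_figure(figure_spec)
            # 3) Keep only pairs judged well aligned, since text-image
            #    misalignment is the failure mode the paper targets.
            if score_alignment(simpler_text, image) > 0.8:  # assumed threshold
                dataset.append({"question": simpler_text, "image": image})
    return dataset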

Decoding Matters: Addressing Amplification Bias and Homogeneity Issue in Recommendations for Large Language Models
Keqin Bao | Jizhi Zhang | Yang Zhang | Xinyue Huo | Chong Chen | Fuli Feng
Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing

Adapting Large Language Models (LLMs) for recommendation requires careful consideration of the decoding process, given the inherent differences between generating items and natural language. Existing approaches often directly apply LLMs’ original decoding methods. However, we find these methods encounter significant challenges: 1) amplification bias—where standard length normalization inflates scores for items containing tokens with generation probabilities close to 1 (termed ghost tokens), and 2) homogeneity issue—generating multiple similar or repetitive items for a user. To tackle these challenges, we introduce a new decoding approach named Debiasing-Diversifying Decoding (D3). D3 disables length normalization for ghost tokens to alleviate amplification bias, and it incorporates a text-free assistant model to encourage tokens less frequently generated by LLMs for counteracting recommendation homogeneity. Extensive experiments on real-world datasets demonstrate the method’s effectiveness in enhancing accuracy and diversity.
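
The abstract names two mechanisms: excluding near-certain "ghost" tokens from length normalization, and blending in a text-free assistant model to promote items the LLM generates less often. The following is a minimal sketch of such a scoring rule for one candidate item; the names (ghost_threshold, assistant_logprobs, alpha) and the exact combination are assumptions for illustration, not the paper's implementation of D3.

import math

def d3_style_score(llm_logprobs, assistant_logprobs, ghost_threshold=0.99, alpha=0.2):
    """Score one candidate item (a token sequence) for ranking.

    llm_logprobs:       per-token log-probabilities from the LLM
    assistant_logprobs: per-token log-probabilities from a text-free assistant model
    ghost_threshold:    tokens whose probability exceeds this count as "ghost" tokens
    alpha:              weight of the assistant model used to diversify results
    """
    # Sum LLM log-probabilities, but count only non-ghost tokens toward the
    # normalization length, so near-certain tokens no longer inflate the score.
    total_llm = sum(llm_logprobs)
    effective_len = sum(1 for lp in llm_logprobs if math.exp(lp) < ghost_threshold)
    effective_len = max(effective_len, 1)  # avoid division by zero
    debiased = total_llm / effective_len

    # Blend in the assistant model to up-weight tokens the LLM rarely generates,
    # counteracting homogeneous (repetitive) recommendations.
    diversity = sum(assistant_logprobs) / max(len(assistant_logprobs), 1)
    return debiased + alpha * diversity

Such a score would replace the stock length-normalized log-likelihood when ranking beam candidates; how ghost tokens are actually detected and how the assistant model is obtained are detailed in the paper.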

2023

Robust Prompt Optimization for Large Language Models Against Distribution Shifts
Moxin Li | Wenjie Wang | Fuli Feng | Yixin Cao | Jizhi Zhang | Tat-Seng Chua
Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing

Large Language Models (LLMs) have demonstrated significant ability in various Natural Language Processing tasks. However, their effectiveness is highly dependent on the phrasing of the task prompt, which has motivated research on automatic prompt optimization using labeled task data. We reveal that these prompt optimization techniques are vulnerable to distribution shifts such as subpopulation shifts, which are common for LLMs in real-world scenarios such as customer review analysis. In this light, we propose a new problem of robust prompt optimization for LLMs against distribution shifts, which requires that the prompt optimized over the labeled source group simultaneously generalize to an unlabeled target group. To solve this problem, we propose the Generalized Prompt Optimization framework, which incorporates unlabeled data from the target group into prompt optimization. Extensive experimental results demonstrate the effectiveness of the proposed framework, with significant performance improvement on the target group and comparable performance on the source group.
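
The stated core idea is to bring unlabeled target-group examples into the prompt selection loop. Below is a minimal sketch of one way that could look; the scoring rule (labeled source accuracy plus a label-free self-consistency proxy on the target group) and the weight beta are assumptions made for illustration, not the paper's actual objective.

def select_robust_prompt(candidate_prompts, source_data, target_inputs, llm, beta=0.5):
    """Pick the prompt that does well on labeled source data while behaving
    consistently on unlabeled target inputs (a stand-in for generalization).

    source_data:   list of (input_text, label) pairs from the source group
    target_inputs: list of input_text strings from the target group (no labels)
    llm:           callable llm(prompt, input_text) -> predicted label string
    """
    def source_accuracy(prompt):
        correct = sum(llm(prompt, x) == y for x, y in source_data)
        return correct / len(source_data)

    def target_consistency(prompt):
        # Label-free proxy: how often repeated queries agree with the majority answer.
        agree = 0.0
        for x in target_inputs:
            preds = [llm(prompt, x) for _ in range(3)]
            majority = max(set(preds), key=preds.count)
            agree += preds.count(majority) / len(preds)
        return agree / len(target_inputs)

    return max(candidate_prompts,
               key=lambda p: source_accuracy(p) + beta * target_consistency(p))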

2021

Empowering Language Understanding with Counterfactual Reasoning
Fuli Feng | Jizhi Zhang | Xiangnan He | Hanwang Zhang | Tat-Seng Chua
Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021