Hao Zhang
2025
SolEval: Benchmarking Large Language Models for Repository-level Solidity Smart Contract Generation
Zhiyuan Peng | Xin Yin | Rui Qian | Peiqin Lin | YongKang Liu | Hao Zhang | Chenhao Ying | Yuan Luo
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing
Large language models (LLMs) have transformed code generation. However, most existing approaches focus on mainstream languages such as Python and Java, neglecting the Solidity language, the predominant programming language for Ethereum smart contracts. Due to the lack of adequate benchmarks for Solidity, LLMs' ability to generate secure, cost-effective smart contracts remains unexplored. To fill this gap, we construct SolEval, the first repository-level benchmark designed for Solidity smart contract generation, to evaluate the performance of LLMs on Solidity. SolEval consists of 1,507 samples from 28 different repositories, covering 6 popular domains, providing LLMs with a comprehensive evaluation benchmark. Unlike the existing Solidity benchmark, SolEval not only includes complex function calls but also reflects the real-world complexity of the Ethereum ecosystem by incorporating Gas@k and Vul@k. We evaluate 16 LLMs on SolEval, and our results show that the best-performing LLM achieves only 26.29% Pass@10, highlighting substantial room for improvement in Solidity code generation by LLMs. Additionally, we conduct supervised fine-tuning (SFT) on Qwen-7B using SolEval, resulting in a significant performance improvement, with Pass@5 increasing from 16.67% to 58.33%, demonstrating the effectiveness of fine-tuning LLMs on our benchmark. We release our data and code at https://github.com/pzy2000/SolEval.
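The Pass@k numbers reported above are typically computed with the standard unbiased estimator (Chen et al., 2021): generate n samples per task, count the c that pass, and estimate the chance that at least one of k drawn samples is correct. A minimal sketch, assuming SolEval follows this convention (the function name is illustrative, not from the paper):

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased Pass@k estimator: probability that at least one of k
    samples drawn without replacement from n generations passes,
    given that c of the n generations pass all tests."""
    if n - c < k:
        # Too few failures to fill a k-sample draw with all failures.
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)
```

For example, with 10 generations of which 1 passes, `pass_at_k(10, 1, 1)` gives 0.1 and `pass_at_k(10, 1, 10)` gives 1.0; per-task estimates are then averaged over the benchmark.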
2024
An Instruction Tuning-Based Contrastive Learning Framework for Aspect Sentiment Quad Prediction with Implicit Aspects and Opinions
Hao Zhang | Yu-N Cheah | Congqing He | Feifan Yi
Findings of the Association for Computational Linguistics: EMNLP 2024
Aspect sentiment quad prediction (ASQP) is crucial in aspect-based sentiment analysis (ABSA). It involves identifying a text's aspect, sentiment, opinion, and category. Existing methods have insufficiently explored how to effectively leverage the knowledge of pre-trained language models (PLMs) to handle implicit aspects and opinions, particularly in combinations such as implicit aspect & explicit opinion, explicit aspect & implicit opinion, and implicit aspect & implicit opinion. We introduce ITSCL, a framework leveraging Instruction Tuning and Supervised Contrastive Learning to improve aspect sentiment quad predictions, especially for implicit aspects and opinions. Implementing this approach presents several challenges. First, designing effective instructions and prompts to optimize the model's training is difficult. Second, creating sentiment combination vectors with contrastive learning to enhance the model's discrimination requires further investigation. To address these challenges, ITSCL combines instruction tuning with aligned PLM templates, enabling better knowledge acquisition and identification of implicit sentiments. Additionally, the contrastive learning framework enhances performance by using four fully connected layers to combine sentiments, aspects, opinions, and combinations, maximizing similarity for same-label representations and minimizing it for different labels. Experimental results show our method significantly outperforms previous methods on benchmark datasets.