Xin Yin
2025
SolEval: Benchmarking Large Language Models for Repository-level Solidity Smart Contract Generation
Zhiyuan Peng | Xin Yin | Rui Qian | Peiqin Lin | YongKang Liu | Hao Zhang | Chenhao Ying | Yuan Luo
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing
Large language models (LLMs) have transformed code generation. However, most existing approaches focus on mainstream languages such as Python and Java, neglecting Solidity, the predominant programming language for Ethereum smart contracts. Due to the lack of adequate benchmarks for Solidity, LLMs’ ability to generate secure, cost-effective smart contracts remains unexplored. To fill this gap, we construct SolEval, the first repository-level benchmark designed for Solidity smart contract generation, to evaluate the performance of LLMs on Solidity. SolEval consists of 1,507 samples from 28 different repositories, covering 6 popular domains, providing LLMs with a comprehensive evaluation benchmark. Unlike the existing Solidity benchmark, SolEval not only includes complex function calls but also reflects the real-world complexity of the Ethereum ecosystem by incorporating the Gas@k and Vul@k metrics. We evaluate 16 LLMs on SolEval, and our results show that the best-performing LLM achieves only 26.29% Pass@10, highlighting substantial room for improvement in Solidity code generation. Additionally, we conduct supervised fine-tuning (SFT) on Qwen-7B using SolEval, yielding a significant performance improvement, with Pass@5 increasing from 16.67% to 58.33% and demonstrating the effectiveness of fine-tuning LLMs on our benchmark. We release our data and code at https://github.com/pzy2000/SolEval.
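For context on the metric: the abstract does not define Pass@k, but it conventionally refers to the unbiased estimator of Chen et al. (2021). Assuming SolEval follows that convention, a minimal Python sketch (the function name is illustrative):

import numpy as np

def pass_at_k(n, c, k):
    """Unbiased Pass@k estimator (Chen et al., 2021).

    n: total samples generated per problem.
    c: number of samples that pass the tests.
    k: evaluation budget; requires n >= k.
    """
    if n - c < k:
        return 1.0
    # 1 - C(n-c, k) / C(n, k), computed stably as a running product.
    return 1.0 - np.prod(1.0 - k / np.arange(n - c + 1, n + 1))

Gas@k and Vul@k presumably adapt this budget-of-k framing to gas cost and vulnerability incidence; their exact definitions are given in the paper and are not reproduced here.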
Pre-training CLIP against Data Poisoning with Optimal Transport-based Matching and Alignment
Tong Zhang | Kuofeng Gao | Jiawang Bai | Leo Yu Zhang | Xin Yin | Zonghui Wang | Shouling Ji | Wenzhi Chen
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing
Recent studies have shown that Contrastive Language-Image Pre-training (CLIP) models are threatened by targeted data poisoning and backdoor attacks because their training image-caption pairs are crawled from the Internet at massive scale. Previous defense methods correct poisoned image-caption pairs by matching a new caption for each image. However, the matching process relies solely on global representations of images and captions, overlooking fine-grained visual and textual features; it may therefore introduce incorrect image-caption pairs and harm CLIP pre-training. To address these limitations, we propose an Optimal Transport-based framework, named OTCCLIP, to reconstruct the image-caption pairs. We introduce a new optimal transport-based distance measure between fine-grained visual and textual feature sets and re-assign new captions based on the proposed optimal transport distance. Additionally, to further reduce the negative impact of mismatched pairs, we encourage inter- and intra-modality fine-grained alignment by employing optimal transport-based objective functions. Our experiments demonstrate that OTCCLIP can successfully decrease the attack success rates of poisoning attacks to 0% in most cases. Also, compared to previous methods, OTCCLIP significantly improves CLIP’s zero-shot and linear probing performance when trained on poisoned datasets.
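The abstract does not spell out the optimal transport distance itself. As a generic illustration only, not OTCCLIP's formulation, an entropic OT distance between two fine-grained feature sets can be computed with Sinkhorn iterations; all names and parameter values below are hypothetical:

import numpy as np

def sinkhorn_ot_distance(x, y, eps=0.05, n_iters=100):
    """Entropic OT distance between two fine-grained feature sets.

    x: (m, d) array of patch/token features from one modality.
    y: (n, d) array of features from the other modality.
    Returns the transport cost <P, C> under uniform marginals.
    """
    # Cosine cost: smaller when fine-grained features align.
    x = x / np.linalg.norm(x, axis=1, keepdims=True)
    y = y / np.linalg.norm(y, axis=1, keepdims=True)
    C = 1.0 - x @ y.T                                # (m, n) cost matrix

    m, n = C.shape
    a, b = np.full(m, 1.0 / m), np.full(n, 1.0 / n)  # uniform marginals
    K = np.exp(-C / eps)                             # Gibbs kernel

    u = np.ones(m)
    for _ in range(n_iters):                         # Sinkhorn updates
        v = b / (K.T @ u)
        u = a / (K @ v)

    P = u[:, None] * K * v[None, :]                  # transport plan
    return float((P * C).sum())

A lower distance indicates that the fine-grained features of an image and a candidate caption can be matched at low cost, which is the intuition behind re-assigning captions by OT distance.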