Craw4LLM: Efficient Web Crawling for LLM Pretraining

Shi Yu, Zhiyuan Liu, Chenyan Xiong


Abstract
Web crawls are a main source of large language models' (LLMs) pretraining data, but the majority of crawled web pages are discarded during pretraining due to low data quality. This paper presents Craw4LLM, an efficient web crawling method that explores the web graph based on the preferences of LLM pretraining. Specifically, it uses a webpage's influence on LLM pretraining as the priority score in the web crawler's scheduler, replacing the standard graph-connectivity-based priority. Our experiments on a web graph containing 900 million webpages from a commercial search engine's index demonstrate the efficiency of Craw4LLM in obtaining high-quality pretraining data. With just 21% of the URLs crawled, LLMs pretrained on Craw4LLM data reach the same downstream performance as previous crawls, significantly reducing crawling waste and alleviating the burden on websites. Our code is publicly available at https://github.com/cxcscmu/Craw4LLM.
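
To illustrate the scheduling idea described in the abstract, the following is a minimal sketch, not the paper's implementation. It assumes hypothetical helpers get_document and get_outlinks over a web-graph index, and a placeholder pretraining_influence scorer standing in for the paper's actual influence metric; the point is that frontier URLs are ranked by that score rather than by graph connectivity.

```python
import heapq

def pretraining_influence(url, get_document):
    """Hypothetical priority scorer: applies a data-quality / influence model
    to the page's text. In a simulated web-graph setting the document text is
    already available; a live crawler would need a cheaper pre-fetch proxy."""
    text = get_document(url)
    return float(len(text.split()))  # placeholder heuristic, not the paper's scorer

def crawl(seeds, get_document, get_outlinks, budget):
    """Minimal priority-queue crawler: the frontier is ordered by estimated
    pretraining influence instead of connectivity (e.g., in-degree/PageRank)."""
    frontier = [(float("-inf"), url) for url in seeds]  # negated scores; seeds pop first
    heapq.heapify(frontier)
    visited, crawled = set(), []

    while frontier and len(crawled) < budget:
        _, url = heapq.heappop(frontier)
        if url in visited:
            continue
        visited.add(url)
        crawled.append(url)

        for out_url in get_outlinks(url):
            if out_url not in visited:
                score = pretraining_influence(out_url, get_document)
                heapq.heappush(frontier, (-score, out_url))  # higher score crawled sooner

    return crawled
```

Swapping the scorer for an in-degree count recovers a connectivity-driven baseline crawl; the sketch only changes what the scheduler optimizes, not the crawl loop itself.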
Anthology ID:
2025.findings-acl.712
Volume:
Findings of the Association for Computational Linguistics: ACL 2025
Month:
July
Year:
2025
Address:
Vienna, Austria
Editors:
Wanxiang Che, Joyce Nabende, Ekaterina Shutova, Mohammad Taher Pilehvar
Venue:
Findings
Publisher:
Association for Computational Linguistics
Pages:
13843–13851
URL:
https://preview.aclanthology.org/display_plenaries/2025.findings-acl.712/
Cite (ACL):
Shi Yu, Zhiyuan Liu, and Chenyan Xiong. 2025. Craw4LLM: Efficient Web Crawling for LLM Pretraining. In Findings of the Association for Computational Linguistics: ACL 2025, pages 13843–13851, Vienna, Austria. Association for Computational Linguistics.
Cite (Informal):
Craw4LLM: Efficient Web Crawling for LLM Pretraining (Yu et al., Findings 2025)
PDF:
https://preview.aclanthology.org/display_plenaries/2025.findings-acl.712.pdf