Koala: An Index for Quantifying Overlaps with Pre-training Corpora

Thuy-Trang Vu, Xuanli He, Gholamreza Haffari, Ehsan Shareghi


Abstract
In recent years, increasing attention has been placed on probing the role of pre-training data in the downstream behaviour of Large Language Models (LLMs). Despite its importance, there is no public tool that supports such analysis of pre-training corpora at large scale. To help research in this space, we launch Koala, a searchable index over large pre-training corpora built on lossless compressed suffix arrays, which offer a highly efficient compression rate and fast search support. In its first release we index the public portion of the OPT 175B, GPT-3, GPT-Neo, LLaMA, BERT, ELECTRA, RoBERTa, and XLNet pre-training corpora. Koala provides a framework for forensic analysis of current and future benchmarks, as well as for assessing the degree of memorization in the output of LLMs. Koala is available for public use at https://koala-index.erc.monash.edu/.
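To illustrate the core idea behind such an index, the sketch below builds a plain (uncompressed) suffix array over a toy corpus and uses binary search to test whether spans of a query, such as a benchmark example or an LLM output, occur verbatim in the indexed text. This is only a conceptual illustration, not Koala's implementation: Koala uses lossless compressed suffix arrays over terabyte-scale corpora, and the corpus string and helper names here are hypothetical.

```python
# Minimal sketch of overlap quantification with a suffix array.
# NOT Koala's implementation: Koala uses compressed suffix arrays at corpus scale;
# the corpus, query, and function names below are illustrative only.

def build_suffix_array(text: str) -> list[int]:
    """Return the start offsets of all suffixes of `text`, sorted lexicographically."""
    return sorted(range(len(text)), key=lambda i: text[i:])

def contains(text: str, sa: list[int], pattern: str) -> bool:
    """Binary-search the suffix array to test whether `pattern` occurs in `text`."""
    lo, hi = 0, len(sa)
    while lo < hi:
        mid = (lo + hi) // 2
        if text[sa[mid]:sa[mid] + len(pattern)] < pattern:
            lo = mid + 1
        else:
            hi = mid
    return lo < len(sa) and text[sa[lo]:sa[lo] + len(pattern)] == pattern

def longest_overlap(query: str, text: str, sa: list[int]) -> str:
    """Longest contiguous word span of `query` found verbatim in the indexed corpus
    (character-level matching; a real index would respect token boundaries)."""
    words = query.split()
    best = ""
    for i in range(len(words)):
        for j in range(len(words), i, -1):
            span = " ".join(words[i:j])
            if len(span) > len(best) and contains(text, sa, span):
                best = span
                break
    return best

if __name__ == "__main__":
    corpus = "the quick brown fox jumps over the lazy dog . language models memorize text ."
    sa = build_suffix_array(corpus)
    print(longest_overlap("models memorize text from the web", corpus, sa))
    # prints: models memorize text
```

The length of the longest verbatim span (or the fraction of query n-grams found in the corpus) is one simple way to quantify overlap; compressed suffix arrays make the same query feasible over full pre-training corpora.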
Anthology ID:
2023.emnlp-demo.7
Volume:
Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing: System Demonstrations
Month:
December
Year:
2023
Address:
Singapore
Editors:
Yansong Feng, Els Lefever
Venue:
EMNLP
Publisher:
Association for Computational Linguistics
Pages:
90–98
URL:
https://aclanthology.org/2023.emnlp-demo.7
DOI:
10.18653/v1/2023.emnlp-demo.7
Cite (ACL):
Thuy-Trang Vu, Xuanli He, Gholamreza Haffari, and Ehsan Shareghi. 2023. Koala: An Index for Quantifying Overlaps with Pre-training Corpora. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 90–98, Singapore. Association for Computational Linguistics.
Cite (Informal):
Koala: An Index for Quantifying Overlaps with Pre-training Corpora (Vu et al., EMNLP 2023)
PDF:
https://preview.aclanthology.org/add_acl24_videos/2023.emnlp-demo.7.pdf
Video:
https://preview.aclanthology.org/add_acl24_videos/2023.emnlp-demo.7.mp4