Low-Perplexity LLM-Generated Sequences and Where To Find Them

Arthur Wuhrmann, Andrei Kucharavy, Anastasiia Kucherenko


Abstract
As Large Language Models (LLMs) become increasingly widespread, understanding how specific training data shapes their outputs is crucial for transparency, accountability, privacy, and fairness. To explore how LLMs leverage and replicate their training data, we introduce a systematic approach centered on analyzing low-perplexity sequences (high-probability text spans generated by the model). Our pipeline reliably extracts such long sequences across diverse topics while avoiding degeneration, then traces them back to their sources in the training data. Surprisingly, we find that a substantial portion of these low-perplexity spans cannot be mapped to the corpus. For those that do match, we quantify the distribution of occurrences across source documents, highlighting the scope and nature of verbatim recall and paving the way toward a better understanding of how LLMs' training data impacts their behavior.
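The exact extraction pipeline is described in the paper itself, not on this page. As a rough illustration of the idea in the abstract, the sketch below generates a continuation with a small open causal LM, re-scores each generated token's probability, and keeps long runs of high-probability tokens as candidate low-perplexity spans that could then be searched for verbatim in a training corpus. The model choice (gpt2), probability threshold, and minimum span length are placeholder assumptions, not values taken from the paper.

```python
# Illustrative sketch only, not the authors' released pipeline: generate text,
# score each token, and flag long runs of high-probability tokens.
import math

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "gpt2"       # placeholder model, not necessarily the paper's LLM
PROB_THRESHOLD = 0.5      # placeholder per-token probability cutoff
MIN_SPAN_TOKENS = 8       # placeholder minimum span length

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)
model.eval()

prompt = "The theory of general relativity states that"
inputs = tokenizer(prompt, return_tensors="pt")
prompt_len = inputs["input_ids"].shape[1]

with torch.no_grad():
    generated = model.generate(
        **inputs,
        max_new_tokens=100,
        do_sample=True,
        top_p=0.95,
        pad_token_id=tokenizer.eos_token_id,
    )
    # Re-score the full sequence to get the probability of every token.
    logits = model(generated).logits  # shape: (1, seq_len, vocab)

# Token at position i is predicted by the logits at position i - 1.
log_probs = torch.log_softmax(logits[:, :-1, :], dim=-1)
token_log_probs = log_probs.gather(
    -1, generated[:, 1:].unsqueeze(-1)
).squeeze(-1)[0]

# Collect maximal runs of consecutive generated tokens above the threshold.
spans, current = [], []
for pos, lp in enumerate(token_log_probs.tolist(), start=1):
    # Only consider tokens the model generated, not the prompt itself.
    if pos >= prompt_len and math.exp(lp) >= PROB_THRESHOLD:
        current.append(pos)
    else:
        if len(current) >= MIN_SPAN_TOKENS:
            spans.append(tokenizer.decode(generated[0, current[0]:current[-1] + 1]))
        current = []
if len(current) >= MIN_SPAN_TOKENS:
    spans.append(tokenizer.decode(generated[0, current[0]:current[-1] + 1]))

# Candidate low-perplexity spans to search for in the training corpus.
print(spans)
```

The sketch stops at span extraction; tracing matched spans back to specific source documents, as the abstract describes, would additionally require indexed access to the model's training corpus.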
Anthology ID: 2025.acl-srw.51
Volume: Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 4: Student Research Workshop)
Month: July
Year: 2025
Address: Vienna, Austria
Editors: Jin Zhao, Mingyang Wang, Zhu Liu
Venues: ACL | WS
Publisher: Association for Computational Linguistics
Pages: 774–783
URL: https://preview.aclanthology.org/landing_page/2025.acl-srw.51/
Cite (ACL): Arthur Wuhrmann, Andrei Kucharavy, and Anastasiia Kucherenko. 2025. Low-Perplexity LLM-Generated Sequences and Where To Find Them. In Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 4: Student Research Workshop), pages 774–783, Vienna, Austria. Association for Computational Linguistics.
Cite (Informal): Low-Perplexity LLM-Generated Sequences and Where To Find Them (Wuhrmann et al., ACL 2025)
PDF: https://preview.aclanthology.org/landing_page/2025.acl-srw.51.pdf