Byron Bischoff


2025

OLMoTrace: Tracing Language Model Outputs Back to Trillions of Training Tokens
Jiacheng Liu | Taylor Blanton | Yanai Elazar | Sewon Min | Yen-Sung Chen | Arnavi Chheda-Kothary | Huy Tran | Byron Bischoff | Eric Marsh | Michael Schmitz | Cassidy Trier | Aaron Sarnat | Jenna James | Jon Borchardt | Bailey Kuehl | Evie Yu-Yen Cheng | Karen Farley | Taira Anderson | David Albright | Carissa Schoenick | Luca Soldaini | Dirk Groeneveld | Rock Yuren Pang | Pang Wei Koh | Noah A. Smith | Sophie Lebrecht | Yejin Choi | Hannaneh Hajishirzi | Ali Farhadi | Jesse Dodge
Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 3: System Demonstrations)

We present OLMoTrace, the first system that traces the outputs of language models back to their full, multi-trillion-token training data in real time. OLMoTrace finds and shows verbatim matches between segments of language model output and documents in the training text corpora. Powered by an extended version of infini-gram (Liu et al., 2024), our system returns tracing results within a few seconds. OLMoTrace can help users understand the behavior of language models through the lens of their training data. We showcase how it can be used to explore fact checking, hallucination, and the creativity of language models. OLMoTrace is publicly available and fully open-source.
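To make the verbatim-matching idea in the abstract concrete, here is a minimal Python sketch that greedily finds maximal spans of a model's output appearing word-for-word in a toy corpus. This is an illustrative assumption throughout, not the OLMoTrace implementation: the function name longest_verbatim_spans, the toy data, and the brute-force substring search are all hypothetical, whereas the real system uses an extended infini-gram index to answer such queries over trillions of tokens in seconds.

    # Toy illustration of verbatim-span tracing (NOT the OLMoTrace/infini-gram
    # implementation): greedily grow each candidate span of the model output
    # while it still occurs verbatim in a small in-memory corpus.

    def longest_verbatim_spans(output_tokens, corpus_tokens, min_len=3):
        """Return (start, length) pairs for maximal token spans of
        output_tokens that appear verbatim in corpus_tokens."""
        # Pad with spaces so substring checks respect token boundaries.
        corpus_text = " " + " ".join(corpus_tokens) + " "
        spans = []
        i = 0
        while i < len(output_tokens):
            length = 0
            # Extend the span one token at a time while it still matches.
            while (i + length < len(output_tokens)
                   and " " + " ".join(output_tokens[i:i + length + 1]) + " "
                   in corpus_text):
                length += 1
            if length >= min_len:
                spans.append((i, length))
                i += length  # skip past the matched span
            else:
                i += 1
        return spans

    corpus = "the quick brown fox jumps over the lazy dog".split()
    output = "a quick brown fox jumps happily".split()
    print(longest_verbatim_spans(output, corpus))  # -> [(1, 4)]

A brute-force scan like this is quadratic in the worst case and only feasible for tiny corpora; the point of an infini-gram-style suffix index is that each span-extension query becomes a fast lookup, which is what makes real-time tracing against a multi-trillion-token corpus possible.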