Tianhao Zhang
2025
BBScoreV2: Learning Time-Evolution and Latent Alignment from Stochastic Representation
Tianhao Zhang | Zhecheng Sheng | Zhexiao Lin | Chen Jiang | Dongyeop Kang
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing
Autoregressive generative models play a key role in various language tasks, especially for modeling and evaluating long text sequences. While recent methods leverage stochastic representations to better capture sequence dynamics, encoding both temporal and structural dependencies and utilizing such information for evaluation remains challenging. In this work, we observe that fitting transformer-based model embeddings to a stochastic process yields ordered latent representations from originally unordered model outputs. Building on this insight and prior work, we introduce BBScoreV2, a theoretically grounded, likelihood-based evaluation metric. Empirically, we demonstrate that the stochastic latent space induces a “clustered-to-temporal ordered” mapping of language model representations in high-dimensional space, offering both intuitive and quantitative support for the effectiveness of BBScoreV2. Furthermore, this structure aligns with intrinsic properties of natural language and enhances performance on tasks such as temporal consistency evaluation (e.g., Shuffle tasks) and AI-generated content detection.
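The abstract does not spell out the likelihood computation, but the BBScore line of work models ordered latent trajectories as a Brownian bridge, under which intermediate points are Gaussian around the line between the two endpoints. Purely as a rough illustration (the function name, the fixed `sigma`, and the isotropic-covariance assumption below are ours, not the paper's), a minimal sketch of such a bridge log-likelihood over a sequence of embeddings might look like:

```python
import numpy as np

def bridge_loglik(z: np.ndarray, sigma: float = 1.0) -> float:
    """Average log-likelihood of intermediate embeddings under a Brownian
    bridge pinned at z[0] and z[-1].

    z     : (T+1, d) array of sentence embeddings in temporal order.
    sigma : diffusion scale; BBScore-style metrics estimate this from
            data, here it is a free parameter for illustration.
    """
    T, d = len(z) - 1, z.shape[1]
    assert T >= 2, "need at least one intermediate point"
    total = 0.0
    for t in range(1, T):
        alpha = t / T
        mean = (1 - alpha) * z[0] + alpha * z[T]   # bridge mean at time t
        var = sigma**2 * t * (T - t) / T           # bridge variance at time t
        resid = z[t] - mean
        # log density of N(mean, var * I_d) evaluated at z[t]
        total += -0.5 * (d * np.log(2 * np.pi * var) + resid @ resid / var)
    return total / (T - 1)
```

Under this view, a temporally ordered sequence should score higher than a shuffled permutation of the same sentences, which is the intuition behind using such a likelihood for shuffle-detection tasks.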
Collage: Decomposable Rapid Prototyping for Co-Designed Information Extraction on Scientific PDFs
Sireesh Gururaja | Yueheng Zhang | Guannan Tang | Tianhao Zhang | Kevin Murphy | Yu-Tsen Yi | Junwon Seo | Anthony Rollett | Emma Strubell
Proceedings of the Fifth Workshop on Scholarly Document Processing (SDP 2025)
Recent years in NLP have seen the continued development of domain-specific information extraction tools for scientific documents, alongside the release of increasingly multimodal pretrained language models. While applying and evaluating these new, general-purpose language model systems in specialized domains has never been easier, it remains difficult to compare them with models developed specifically for those domains, which tend to accept a narrower range of input formats and are difficult to evaluate in the context of the original documents. Meanwhile, the general-purpose systems are often black-box and give little insight into preprocessing steps (such as conversion to plain text or Markdown) that can have significant downstream impact on their results. In this work, we present Collage, a tool that supports the co-design of information extraction systems on scientific PDFs between NLP developers and scientists by enabling rapid prototyping, visualization, and comparison of different information extraction models, regardless of their input modality. For scientists, Collage provides side-by-side visualization and comparison of multiple models of different input and output modalities in the context of the PDF content they are applied to; for developers, Collage allows the rapid deployment of new models by abstracting away PDF preprocessing and visualization into easily extensible software interfaces. Further, we enable both developers and scientists to inspect, debug, and better understand modeling pipelines by providing granular views of intermediate processing states. We demonstrate our system in the context of information extraction to assist with literature review in materials science.
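The abstract mentions abstracting PDF preprocessing and visualization behind easily extensible software interfaces but does not reproduce them, and Collage's actual API is not shown here. Purely as a hypothetical sketch of what such a plugin interface could look like (all names below are invented for illustration):

```python
from abc import ABC, abstractmethod
from dataclasses import dataclass

@dataclass
class Annotation:
    """A labeled span anchored to the preprocessed document text."""
    start: int   # character offset into the extracted text
    end: int
    label: str   # e.g. an entity type such as "Material"

class ExtractionModel(ABC):
    """Hypothetical plugin interface: the hosting tool handles PDF
    preprocessing and renders returned annotations over the PDF,
    so a new model only needs to implement prediction over text."""

    @abstractmethod
    def predict(self, text: str) -> list[Annotation]:
        """Run information extraction on one preprocessed document."""
```

The design point this sketch illustrates is that a developer adding a model never touches PDF parsing or rendering; both are handled by the surrounding tool.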