Youngser Park
2025
Statistical inference on black-box generative models in the data kernel perspective space
Hayden Helm | Aranyak Acharyya | Youngser Park | Brandon Duderstadt | Carey Priebe
Findings of the Association for Computational Linguistics: ACL 2025
Generative models are capable of producing human-expert-level content across a variety of topics and domains. As the impact of generative models grows, it is necessary to develop statistical methods to understand collections of available models. These methods are particularly important in settings where the user may not have access to information related to a model’s pre-training data, weights, or other relevant model-level covariates. In this paper we extend recent results on representations of black-box generative models to model-level statistical inference tasks. We demonstrate that these representations are effective for multiple such tasks.
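Loosely, the approach the abstract alludes to represents each black-box model by its responses to a shared prompt set and then works with distances between those representations. The sketch below is a minimal illustration of that idea, assuming embedding access and a mean-vector summary; every function name and modeling choice here is a hypothetical stand-in, not the paper's actual construction.

```python
# Illustrative sketch only -- the mean-embedding summary, MDS embedding,
# and clustering step are assumptions, not the paper's published method.
import numpy as np
from sklearn.manifold import MDS
from sklearn.cluster import KMeans

def model_representation(response_embeddings):
    # response_embeddings: (n_prompts, d) embedded responses of one model
    # to a shared prompt set; summarize as a single model-level vector.
    return response_embeddings.mean(axis=0)

def perspective_space(reps, n_components=2):
    # Pairwise distances between model-level vectors, embedded with MDS
    # so that each black-box model gets low-dimensional coordinates.
    dists = np.linalg.norm(reps[:, None, :] - reps[None, :, :], axis=-1)
    mds = MDS(n_components=n_components, dissimilarity="precomputed",
              random_state=0)
    return mds.fit_transform(dists)

# Toy stand-in: 6 "models" in two groups, 20 responses each in R^16.
rng = np.random.default_rng(0)
fake_embeddings = [rng.normal(loc=i // 3, size=(20, 16)) for i in range(6)]
reps = np.stack([model_representation(E) for E in fake_embeddings])
coords = perspective_space(reps)
# Model-level inference then runs on the coordinates, e.g. clustering:
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(coords)
print(labels)  # expect models 0-2 and 3-5 to separate
```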
2024
Tracking the perspectives of interacting language models
Hayden Helm | Brandon Duderstadt | Youngser Park | Carey Priebe
Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing
Large language models (LLMs) are capable of producing high-quality information at unprecedented rates. As these models continue to entrench themselves in society, the content they produce will become increasingly pervasive in databases that are, in turn, incorporated into the pre-training data, fine-tuning data, retrieval data, etc. of other language models. In this paper we formalize the idea of a communication network of LLMs and introduce a method for representing the perspective of individual models within a collection of LLMs. Given these tools, we systematically study information diffusion in the communication network of LLMs in various simulated settings.
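As a hedged illustration of the simulation setting the abstract describes, the sketch below treats each model's perspective as a vector and lets every communication round average it toward its neighbors, crudely mimicking training on network-produced text. The update rule, rate, and random network are assumptions for illustration, not the authors' formalism.

```python
# Toy diffusion dynamics -- the averaging update is an assumed stand-in
# for models retraining on text produced by their network neighbors.
import numpy as np

def diffuse(perspectives, adjacency, rate=0.1, rounds=50):
    # perspectives: (n_models, d) vectors; adjacency: (n_models, n_models)
    # binary matrix where adjacency[i, j] = 1 means model i reads model j.
    history = [perspectives.copy()]
    for _ in range(rounds):
        deg = adjacency.sum(axis=1, keepdims=True).clip(min=1)
        neighbor_mean = adjacency @ perspectives / deg
        # Each round pulls a model's perspective toward its neighbors'.
        perspectives = (1 - rate) * perspectives + rate * neighbor_mean
        history.append(perspectives.copy())
    return np.stack(history)  # (rounds + 1, n_models, d) trajectory

rng = np.random.default_rng(0)
A = (rng.random((10, 10)) < 0.3).astype(float)
np.fill_diagonal(A, 0)  # no self-loops
traj = diffuse(rng.normal(size=(10, 8)), A)
# The spread of perspectives shrinks as information diffuses:
print(traj[0].std(axis=0).mean(), traj[-1].std(axis=0).mean())
```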
2021
An Analysis of Euclidean vs. Graph-Based Framing for Bilingual Lexicon Induction from Word Embedding Spaces
Kelly Marchisio | Youngser Park | Ali Saad-Eldin | Anton Alyakin | Kevin Duh | Carey Priebe | Philipp Koehn
Findings of the Association for Computational Linguistics: EMNLP 2021
Much recent work in bilingual lexicon induction (BLI) views word embeddings as vectors in Euclidean space. As such, BLI is typically solved by finding a linear transformation that maps embeddings to a common space. Alternatively, word embeddings may be understood as nodes in a weighted graph. This framing allows us to examine a node’s graph neighborhood without assuming a linear transform, and exploits new techniques from the graph matching optimization literature. These contrasting approaches have not been compared in BLI so far. In this work, we study the behavior of Euclidean versus graph-based approaches to BLI under differing data conditions and show that they complement each other when combined. We release our code at https://github.com/kellymarchisio/euc-v-graph-bli.
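The two framings contrasted in the abstract can be sketched side by side. Below is a minimal, assumption-laden version: orthogonal Procrustes plus nearest-neighbor retrieval for the Euclidean view, and kNN graphs aligned with SciPy's FAQ graph-matching solver for the graph view. It assumes equal-size toy vocabularies and is not the released code at the repository linked above.

```python
# Sketch under assumptions: equal-size vocabularies, synthetic seed pairs.
import numpy as np
from scipy.linalg import orthogonal_procrustes
from scipy.optimize import quadratic_assignment
from sklearn.neighbors import kneighbors_graph

def euclidean_bli(X, Y, seed_src, seed_tgt):
    # Euclidean framing: learn an orthogonal map from seed translation
    # pairs, then translate each source word by nearest neighbor.
    W, _ = orthogonal_procrustes(X[seed_src], Y[seed_tgt])
    mapped = X @ W
    dists = np.linalg.norm(mapped[:, None] - Y[None, :], axis=-1)
    return dists.argmin(axis=1)  # predicted target index per source word

def graph_bli(X, Y, k=5):
    # Graph framing: treat each space as a kNN graph and align the two
    # graphs directly (no linear transform) with the FAQ QAP solver.
    A = kneighbors_graph(X, k).toarray()
    B = kneighbors_graph(Y, k).toarray()
    res = quadratic_assignment(A, B, method="faq",
                               options={"maximize": True})
    return res.col_ind  # permutation matching source to target nodes
```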