Manveer Singh Tamber
2025
Can’t Hide Behind the API: Stealing Black-Box Commercial Embedding Models
Manveer Singh Tamber | Jasper Xian | Jimmy Lin
Findings of the Association for Computational Linguistics: NAACL 2025
Embedding models that generate dense vector representations of text are widely used and hold significant commercial value. Companies such as OpenAI and Cohere offer proprietary embedding models via paid APIs, but despite being “hidden” behind APIs, these models are not protected from theft. We present, to our knowledge, the first effort to “steal” these models for retrieval by training thief models on text–embedding pairs obtained from the APIs. Our experiments demonstrate that it is possible to replicate the retrieval effectiveness of commercial embedding models at a cost of under $300. Notably, our methods allow for distilling from multiple teachers into a single robust student model, and for distilling into presumably smaller models that produce lower-dimensional vectors while retaining competitive retrieval effectiveness. Our findings raise important considerations for deploying commercial embedding models and suggest measures to mitigate the risk of model theft.
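As a rough illustration of the distillation setup the abstract describes (not the authors' released code), a student encoder can be trained to regress onto the text–embedding pairs returned by a commercial API. The base model, the 1536-dimensional target size, and the MSE loss below are assumptions for the sketch.

```python
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

# Student encoder: a small open-source model (an illustrative choice).
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
student = AutoModel.from_pretrained("bert-base-uncased")

# Linear head mapping the student's hidden size to the teacher's
# embedding dimension (1536 is assumed here).
proj = nn.Linear(student.config.hidden_size, 1536)
optimizer = torch.optim.AdamW(
    list(student.parameters()) + list(proj.parameters()), lr=2e-5
)

def embed(texts):
    """Mean-pool the student's token states and project into teacher space."""
    batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    hidden = student(**batch).last_hidden_state           # (B, T, H)
    mask = batch["attention_mask"].unsqueeze(-1).float()  # (B, T, 1)
    pooled = (hidden * mask).sum(1) / mask.sum(1)         # mean pooling
    return nn.functional.normalize(proj(pooled), dim=-1)

def train_step(texts, teacher_embeddings):
    """One distillation step: match embeddings obtained from the teacher API."""
    student_emb = embed(texts)
    target = nn.functional.normalize(teacher_embeddings, dim=-1)
    loss = nn.functional.mse_loss(student_emb, target)    # regression onto teacher vectors
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

Distilling from multiple teachers would, under the same assumptions, amount to adding one projection head and one such regression loss per teacher API.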
FaithBench: A Diverse Hallucination Benchmark for Summarization by Modern LLMs
Forrest Sheng Bao | Miaoran Li | Renyi Qu | Ge Luo | Erana Wan | Yujia Tang | Weisi Fan | Manveer Singh Tamber | Suleman Kazi | Vivek Sourabh | Mike Qi | Ruixuan Tu | Chenyu Xu | Matthew Gonzales | Ofer Mendelevitch | Amin Ahmad
Proceedings of the 2025 Conference of the Nations of the Americas Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 2: Short Papers)
Summarization is one of the most common tasks performed by large language models (LLMs), especially in applications like Retrieval-Augmented Generation (RAG). However, existing evaluations of hallucinations in LLM-generated summaries and of hallucination detection models both suffer from a lack of diversity and recency in the LLMs and LLM families considered. This paper introduces FaithBench, a summarization hallucination benchmark comprising challenging hallucinations made by 10 modern LLMs from 8 different families, with ground-truth annotations by human experts. “Challenging” here means summaries on which popular, state-of-the-art hallucination detection models, including GPT-4o-as-a-judge, disagree. Our results show that GPT-4o and GPT-3.5-Turbo produce the fewest hallucinations. However, most state-of-the-art hallucination detection models achieve accuracies near 50% on FaithBench, indicating substantial room for future improvement.
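For a sense of how detector accuracy on such a benchmark is computed (a minimal sketch, not the FaithBench release; the field names and the detector interface are assumptions), scoring a binary hallucination detector against human labels might look like this:

```python
from typing import Callable

def evaluate_detector(
    examples: list[dict],
    detector: Callable[[str, str], bool],
) -> float:
    """Accuracy of a binary hallucination detector against human labels.

    Each example is assumed to carry a source passage, a model-generated
    summary, and a ground-truth boolean `is_hallucinated`.
    """
    correct = 0
    for ex in examples:
        predicted = detector(ex["source"], ex["summary"])
        correct += int(predicted == ex["is_hallucinated"])
    return correct / len(examples)

# Usage with a trivial detector that never flags anything; on a balanced
# set of challenging examples this lands near the ~50% accuracies that
# the benchmark reports for many real detectors.
examples = [
    {"source": "The sky is blue.", "summary": "The sky is green.", "is_hallucinated": True},
    {"source": "The sky is blue.", "summary": "The sky is blue.", "is_hallucinated": False},
]
always_faithful = lambda source, summary: False
print(evaluate_detector(examples, always_faithful))  # 0.5
```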