Private Synthetic Text Generation with Diffusion Models

Sebastian Ochs, Ivan Habernal


Abstract
How capable are diffusion models of generating synthetic texts? Recent research shows their strengths, with performance reaching that of auto-regressive LLMs. But are they also good at generating synthetic data if the training was conducted under differential privacy? Here the evidence is missing, yet the promises from private image generation look strong. In this paper we address this open question with extensive experiments. At the same time, we critically assess (and reimplement) previous works on private synthetic text generation with LLMs and reveal some unmet assumptions that might have led to violating the differential privacy guarantees. Our results partly contradict previous non-private findings and show that fully open-source LLMs outperform diffusion models in the privacy regime. Our complete source code, datasets, and experimental setup are publicly available to foster future research.
Anthology ID:
2025.naacl-long.532
Volume:
Proceedings of the 2025 Conference of the Nations of the Americas Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)
Month:
April
Year:
2025
Address:
Albuquerque, New Mexico
Editors:
Luis Chiruzzo, Alan Ritter, Lu Wang
Venue:
NAACL
Publisher:
Association for Computational Linguistics
Pages:
10612–10626
URL:
https://preview.aclanthology.org/fix-sig-urls/2025.naacl-long.532/
Cite (ACL):
Sebastian Ochs and Ivan Habernal. 2025. Private Synthetic Text Generation with Diffusion Models. In Proceedings of the 2025 Conference of the Nations of the Americas Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers), pages 10612–10626, Albuquerque, New Mexico. Association for Computational Linguistics.
Cite (Informal):
Private Synthetic Text Generation with Diffusion Models (Ochs & Habernal, NAACL 2025)
PDF:
https://preview.aclanthology.org/fix-sig-urls/2025.naacl-long.532.pdf