@inproceedings{senge-etal-2022-one,
    title = "One size does not fit all: Investigating strategies for differentially-private learning across {NLP} tasks",
    author = "Senge, Manuel  and
      Igamberdiev, Timour  and
      Habernal, Ivan",
    editor = "Goldberg, Yoav  and
      Kozareva, Zornitsa  and
      Zhang, Yue",
    booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing",
    month = dec,
    year = "2022",
    address = "Abu Dhabi, United Arab Emirates",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2022.emnlp-main.496/",
    doi = "10.18653/v1/2022.emnlp-main.496",
    pages = "7340--7353",
    abstract = "Preserving privacy in contemporary NLP models allows us to work with sensitive data, but unfortunately comes at a price. We know that stricter privacy guarantees in differentially-private stochastic gradient descent (DP-SGD) generally degrade model performance. However, previous research on the efficiency of DP-SGD in NLP is inconclusive or even counter-intuitive. In this short paper, we provide an extensive analysis of different privacy preserving strategies on seven downstream datasets in five different `typical' NLP tasks with varying complexity using modern neural models based on BERT and XtremeDistil architectures. We show that unlike standard non-private approaches to solving NLP tasks, where bigger is usually better, privacy-preserving strategies do not exhibit a winning pattern, and each task and privacy regime requires a special treatment to achieve adequate performance."
}