Tudor Cebere
@tcebere.bsky.social
36 followers 140 following 1 posts
PhD student in differential privacy & learning at Inria 🇫🇷
Reposted by Tudor Cebere
differentialprivacy.org
Beyond Laplace and Gaussian: Exploring the Generalized Gaussian Mechanism for Private Machine Learning

Roy Rinberg, Ilia Shumailov, Vikrant Singhal, Rachel Cummings, Nicolas Papernot

http://arxiv.org/abs/2506.12553

Differential privacy (DP) is obtained by randomizing a data analysis
algorithm, which necessarily introduces a tradeoff between its utility and
privacy. Many DP mechanisms are built upon one of two underlying tools: Laplace
and Gaussian additive noise mechanisms. We expand the search space of
algorithms by investigating the Generalized Gaussian mechanism, which samples
the additive noise term $x$ with probability proportional to $e^{-\frac{| x
|}{\sigma}^{\beta} }$ for some $\beta \geq 1$. The Laplace and Gaussian
mechanisms are special cases of GG for $\beta=1$ and $\beta=2$, respectively.
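As a concrete illustration of the mechanism described above, the sketch below samples GG noise using SciPy's `gennorm` distribution, whose density is proportional to $e^{-(|x|/\text{scale})^{\beta}}$. The parameter values (`beta=1.5`, `sigma=0.8`) are illustrative only and are not calibrated to any particular $(\varepsilon, \delta)$ guarantee; the paper's accountant would be needed for that.

```python
# Sketch: additive Generalized Gaussian (GG) noise.
# scipy.stats.gennorm has density p(x) proportional to exp(-(|x|/scale)**beta),
# so beta=1 recovers the Laplace mechanism and beta=2 a (rescaled) Gaussian.
import numpy as np
from scipy.stats import gennorm

def gg_noise(beta, sigma, size, rng=None):
    """Draw noise x with density proportional to exp(-(|x|/sigma)**beta)."""
    return gennorm.rvs(beta, scale=sigma, size=size, random_state=rng)

# Illustrative use: perturb a query answer with GG noise.
rng = np.random.default_rng(0)
true_answer = np.array([3.0, 5.0])
noisy_answer = true_answer + gg_noise(beta=1.5, sigma=0.8, size=2, rng=rng)
```

Varying `beta` between 1 and 2 interpolates between heavier-tailed Laplace-style noise and lighter-tailed Gaussian-style noise, which is exactly the design space the paper explores.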
  In this work, we prove that all members of the GG family satisfy differential
privacy, and provide an extension of an existing numerical accountant (the PRV
accountant) for these mechanisms. We show that privacy accounting for the GG
Mechanism and its variants is dimension independent, which substantially
improves computational costs of privacy accounting.
  We apply the GG mechanism to two canonical tools for private machine
learning, PATE and DP-SGD; we show empirically that $\beta$ has a weak
relationship with test accuracy, and that generally $\beta=2$ (Gaussian) is
nearly optimal. This provides justification for the widespread adoption of the
Gaussian mechanism in DP learning, and can be interpreted as a negative result:
optimizing over $\beta$ does not lead to meaningful improvements in
performance.
Reposted by Tudor Cebere
jelaninelson.bsky.social
Postdoc opportunity: if interested in a postdoc related to sketching starting Summer/Fall'25, especially applied to more efficient foundation model architectures (e.g. faster approx attention), please follow the instructions on the left column of theory.cs.berkeley.edu/postdoc.html by Jan 31