We consider the privacy amplification properties of a sampling scheme in which a user's data is used in k steps chosen randomly and uniformly from a sequence (or set) of t steps. This sampling scheme has recently been used in the context of differentially private optimization (Chua et al., 2024a; Choquette-Choo et al., 2025) and communication-efficient high-dimensional private aggregation (Asi et al., 2025), where it was shown to have utility advantages over standard Poisson sampling. Theoretical analyses of this sampling scheme (Feldman & Shenfeld, 2025; Dong et al., 2025) lead to bounds that are close to those of Poisson sampling, but still have two significant shortcomings. First, in many practical settings, the resulting privacy parameters are not tight due to approximation steps in the analysis. Second, the computed parameters are either the hockey-stick or Rényi divergence, both of which introduce overheads when used in privacy loss accounting.
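The two sampling schemes being compared can be sketched concretely; the function names below are ours, chosen for illustration. Under random allocation the user participates in exactly k of the t steps, while under Poisson sampling with matching rate k/t the number of participating steps is random:

```python
import random

def random_allocation(t: int, k: int, rng=random) -> list[int]:
    # Random allocation: the user's data is used in exactly k steps,
    # chosen uniformly at random (without replacement) from t steps.
    return sorted(rng.sample(range(t), k))

def poisson_sampling(t: int, k: int, rng=random) -> list[int]:
    # Standard Poisson sampling with the same expected participation:
    # each of the t steps includes the user independently with prob. k/t.
    p = k / t
    return [i for i in range(t) if rng.random() < p]

steps = random_allocation(t=100, k=5)
assert len(steps) == 5  # always exactly k steps under random allocation
```

The fixed participation count is what distinguishes the privacy analysis of random allocation from that of Poisson sampling, where each step is an independent coin flip.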
In this work, we show that the privacy loss distribution (PLD) of random allocation applied to any differentially private algorithm can be computed efficiently. When applied to the Gaussian mechanism, our results show that the privacy-utility trade-off of random allocation is at least as good as that of Poisson subsampling. In particular, random allocation is better suited for training via DP-SGD. To support these computations, our work develops new tools for general privacy loss accounting based on a notion of PLD realization. This notion allows us to extend accurate privacy loss accounting to subsampling, which previously required manual, noise-mechanism-specific analysis.
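As background for the Gaussian-mechanism comparison, the privacy loss of a single (non-subsampled) Gaussian release is itself Gaussian, so its δ(ε) curve has a closed form and its k-fold composition reduces to rescaling the sensitivity. The sketch below uses the standard analytic Gaussian formula (Balle & Wang, 2018); it illustrates plain PLD-style accounting only, not this paper's random-allocation algorithm:

```python
import math

def std_normal_cdf(z: float) -> float:
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def gaussian_delta(eps: float, sigma: float, sens: float = 1.0) -> float:
    # Tight delta(eps) for the Gaussian mechanism with noise scale sigma
    # and L2 sensitivity sens:
    #   delta = Phi(s/(2σ) - εσ/s) - e^ε · Phi(-s/(2σ) - εσ/s)
    a = sens / (2.0 * sigma)
    b = eps * sigma / sens
    return std_normal_cdf(a - b) - math.exp(eps) * std_normal_cdf(-a - b)

def composed_gaussian_delta(eps: float, sigma: float, k: int) -> float:
    # The k-fold composition of a Gaussian PLD is again Gaussian,
    # equivalent to a single release with sensitivity sqrt(k).
    return gaussian_delta(eps, sigma, sens=math.sqrt(k))

# At eps = 0 the formula reduces to Phi(1/(2σ)) - Phi(-1/(2σ)).
print(round(gaussian_delta(0.0, 1.0), 4))
```

Subsampled mechanisms (Poisson or random allocation) do not admit such a closed form, which is why efficient numerical PLD computation is the object of interest.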
†The Hebrew University of Jerusalem
