We study differentially private (DP) optimization algorithms for stochastic and empirical objectives which are neither smooth nor convex, and propose methods that return a Goldstein-stationary point with sample complexity bounds that improve on existing works.
We start by providing a single-pass (ϵ,δ)-DP algorithm that returns an (α,β)-stationary point as long as the dataset is of size…