Private Stochastic Convex Optimization with Heavy Tails: Near-Optimality from Simple Reductions
We study the problem of differentially private stochastic convex optimization (DP-SCO) with heavy-tailed gradients, where we assume a $k^{\text{th}}$-moment bound on the Lipschitz constants of sample functions rather than a uniform bound. We propose a new reduction-based approach that enables us to obtain the first optimal rates (up to logarithmic factors) in the heavy-tailed setting, achieving error $G_2 \cdot \frac{1}{\sqrt{n}} + G_k \cdot \left(\frac{\sqrt{d}}{n\varepsilon}\right)^{1 - \frac{1}{k}}$ under $(\varepsilon, \delta)$-approximate differential privacy…
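Spelled out in display form, the moment assumption and the claimed rate read roughly as follows. The notation here ($\mathcal{D}$ for the sample distribution, $\mathrm{Lip}(f(\cdot, x))$ for the Lipschitz constant of the sample function indexed by $x$, and $n$, $d$ for the sample size and dimension) is assumed shorthand for this sketch, not symbols taken verbatim from the paper:
\[
\Big(\mathbb{E}_{x \sim \mathcal{D}}\big[\mathrm{Lip}\big(f(\cdot, x)\big)^{k}\big]\Big)^{1/k} \le G_k
\quad \Longrightarrow \quad
\text{excess risk} \;\lesssim\; G_2 \cdot \frac{1}{\sqrt{n}} \;+\; G_k \cdot \left(\frac{\sqrt{d}}{n \varepsilon}\right)^{1 - \frac{1}{k}},
\]
up to logarithmic factors, under $(\varepsilon, \delta)$-approximate differential privacy. The $G_2/\sqrt{n}$ term matches the non-private statistical rate, while the second term is the price of privacy scaled by the $k^{\text{th}}$-moment bound $G_k$.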