Abstract
In this paper, we consider efficient differentially private empirical risk minimization from the viewpoint
of optimization algorithms. For strongly convex and
smooth objectives, we prove that gradient descent
with output perturbation not only achieves nearly
optimal utility, but also significantly improves the
running time of previous state-of-the-art private optimization algorithms, for both $\epsilon$-DP and $(\epsilon, \delta)$-DP.
For non-convex but smooth objectives, we propose
an RRPSGD (Random Round Private Stochastic
Gradient Descent) algorithm, which provably converges to a stationary point with a privacy guarantee.
Besides the expected utility bounds, we also provide high-probability guarantees. Experiments demonstrate that our algorithm consistently outperforms existing methods in both utility and running time.