
Private Stochastic Convex Optimization with Optimal Rates

2020-02-20

Abstract

We study differentially private (DP) algorithms for stochastic convex optimization (SCO). In this problem the goal is to approximately minimize the population loss given i.i.d. samples from a distribution over convex and Lipschitz loss functions. A long line of existing work on private convex optimization focuses on the empirical loss and derives asymptotically tight bounds on the excess empirical loss. However, a significant gap exists in the known bounds for the population loss. We show that, up to logarithmic factors, the optimal excess population loss for DP algorithms is equal to the larger of the optimal non-private excess population loss, and the optimal excess empirical loss of DP algorithms. This implies that, contrary to intuition based on private ERM, private SCO has asymptotically the same rate of $1/\sqrt{n}$ as non-private SCO in the parameter regime most common in practice. The best previous result in this setting gives rate of $d^{1/4}/\sqrt{n}$. Our approach builds on existing differentially private algorithms and relies on the analysis of algorithmic stability to ensure generalization.
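To make the abstract's setting concrete, here is a minimal, generic sketch of noisy projected SGD (gradient perturbation), a standard differentially private building block in this line of work. It is not the paper's exact algorithm: the function name `noisy_projected_sgd` and all hyperparameters (`lr`, `sigma`, `radius`) are illustrative assumptions, and the noise scale `sigma` is a placeholder; calibrating it to a target $(\varepsilon, \delta)$ requires a privacy analysis that is not reproduced here.

```python
import numpy as np

def noisy_projected_sgd(samples, grad_fn, dim, lr=0.05, sigma=0.2,
                        radius=1.0, rng=None):
    """One pass of projected SGD where Gaussian noise is added to each
    per-sample gradient; iterates are kept inside an L2 ball of the
    given radius (the convex constraint set in SCO)."""
    rng = np.random.default_rng(rng)
    w = np.zeros(dim)
    for x in samples:
        # Perturbed gradient step: clean per-sample gradient plus noise.
        # NOTE: sigma here is an uncalibrated placeholder, not a value
        # derived from a target (epsilon, delta).
        g = grad_fn(w, x) + sigma * rng.standard_normal(dim)
        w = w - lr * g
        # Euclidean projection back onto the L2 ball of the given radius.
        norm = np.linalg.norm(w)
        if norm > radius:
            w = w * (radius / norm)
    return w

# Toy example: the squared loss f(w; x) = ||w - x||^2 / 2 is convex and
# Lipschitz on the ball, and its population minimizer is the data mean,
# so the output should land near (0.3, 0.3) despite the injected noise.
rng = np.random.default_rng(0)
data = rng.normal(loc=0.3, scale=0.1, size=(2000, 2))
w_hat = noisy_projected_sgd(data, lambda w, x: w - x, dim=2, rng=1)
print(w_hat)
```

The one-pass structure matters for the population-loss perspective: each sample is touched once, so the iterate's expected loss on a fresh sample can be controlled directly, which is the kind of stability-based generalization argument the abstract alludes to.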


