Efficient Private Empirical Risk Minimization for High-dimensional Learning


Abstract

Dimensionality reduction is a popular approach for dealing with high-dimensional data that leads to substantial computational savings. Random projections are a simple and effective method for universal dimensionality reduction with rigorous theoretical guarantees. In this paper, we theoretically study the problem of differentially private empirical risk minimization in the projected subspace (compressed domain). We ask: is it possible to design differentially private algorithms with small excess risk given access to only the projected data? We answer this question in the affirmative by showing that, for the class of generalized linear functions, given only the projected data and the projection matrix, we can obtain excess risk bounds of Õ(w(C)^{2/3} / n^{1/3}) under ε-differential privacy, and Õ(√w(C) / √n) under (ε, δ)-differential privacy, where n is the sample size and w(C) is the Gaussian width of the parameter space C that we optimize over. A simple consequence of these results is that, for a large class of ERM problems, in the traditional setting (i.e., with access to the original data), under ε-differential privacy we improve the worst-case risk bounds of Bassily et al. (2014).
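To make the compressed-domain setup concrete, the sketch below projects high-dimensional data with a random Gaussian matrix and then runs noisy gradient descent on a logistic (generalized linear) loss in the projected space. This is an illustrative reconstruction, not the paper's algorithm: the function names, clipping threshold, step count, and per-step noise calibration are all assumptions, and the privacy accounting is a naive composition bound rather than a tight (ε, δ)-DP analysis.

```python
# Illustrative sketch only (hypothetical names; NOT the paper's algorithm):
# random Gaussian projection followed by noisy gradient descent on a logistic
# (generalized linear) loss in the compressed domain.
import numpy as np

rng = np.random.default_rng(0)

def random_projection(X, m):
    """Project rows of X (n x d) into R^m using a Gaussian matrix Phi (m x d)."""
    Phi = rng.normal(0.0, 1.0 / np.sqrt(m), size=(m, X.shape[1]))
    return X @ Phi.T, Phi

def noisy_gd_logistic(Xp, y, eps, delta, T=50, lr=0.1, clip=1.0):
    """Noisy gradient descent on the projected data Xp (n x m).

    Per-example gradients are clipped to `clip`; the per-step Gaussian noise
    scale uses a naive composition bound, so the constants are placeholders
    rather than a tight (eps, delta)-DP accounting.
    """
    n, m = Xp.shape
    theta = np.zeros(m)
    sigma = clip * np.sqrt(2.0 * T * np.log(1.25 / delta)) / (n * eps)
    for _ in range(T):
        z = Xp @ theta
        grads = (1.0 / (1.0 + np.exp(-z)) - y)[:, None] * Xp   # per-example grads
        norms = np.linalg.norm(grads, axis=1, keepdims=True)
        grads *= np.minimum(1.0, clip / np.maximum(norms, 1e-12))  # clip to `clip`
        theta -= lr * (grads.mean(axis=0) + rng.normal(0.0, sigma, size=m))
    return theta

# Toy usage: n = 500 samples in d = 10000 dimensions, compressed to m = 100.
n, d, m = 500, 10_000, 100
X = rng.normal(size=(n, d))
y = (X @ rng.normal(size=d) > 0).astype(float)
Xp, _ = random_projection(X, m)
theta = noisy_gd_logistic(Xp, y, eps=1.0, delta=1e-5)
print("train accuracy in compressed domain:", ((Xp @ theta > 0) == y).mean())
```

Note the design point the abstract emphasizes: the learner only ever touches the projected data Xp and the projection matrix Phi, and the excess-risk bounds depend on the Gaussian width w(C) and sample size n rather than the ambient dimension d.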

