Privacy for Free: Posterior Sampling and Stochastic Gradient Monte Carlo

2020-03-05

Abstract

We consider the problem of Bayesian learning on sensitive datasets and present two simple but somewhat surprising results that connect Bayesian learning to “differential privacy”, a cryptographic approach to protecting individual-level privacy while permitting database-level utility. Specifically, we show that under standard assumptions, getting one sample from a posterior distribution is differentially private “for free”, and that this sample as a statistical estimator is often consistent, near optimal, and computationally tractable. Similarly but separately, we show that a recent line of work that uses stochastic gradients for Hybrid Monte Carlo (HMC) sampling also preserves differential privacy with minor or no modifications of the algorithmic procedure at all. These observations lead to an “anytime” algorithm for Bayesian learning under privacy constraints. We demonstrate that it performs much better than the state-of-the-art differentially private methods on synthetic and real datasets.
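The two mechanisms the abstract compresses can be made concrete. For the first, the posterior p(θ|D) ∝ exp(Σᵢ log p(xᵢ|θ) + log π(θ)) is an instance of the exponential mechanism with the log-likelihood sum as the utility; if |log p(x|θ)| ≤ B for all x and θ, that utility has sensitivity at most 2B, so releasing a single exact posterior sample is 4B-differentially private by the standard exponential-mechanism analysis (check the paper for its exact assumptions and constants). For the second, the sketch below shows a plain stochastic gradient Langevin dynamics (SGLD) update, a close relative of the stochastic gradient HMC samplers the abstract refers to: the Gaussian noise injected for approximate posterior sampling is the same noise that masks any single record's influence. This is a minimal illustration under assumed bounded-gradient conditions; the function names, toy model, and constants are mine, not the paper's reference implementation.

```python
import numpy as np

def sgld_step(theta, grad_log_prior, grad_log_lik, data, step_size, batch_size, rng):
    """One SGLD update: theta' = theta + (h/2) * grad log p(theta|D) + N(0, h).

    The N(0, h) noise required for (approximate) posterior sampling also
    masks any single record's contribution to the minibatch gradient, which
    is what the paper's privacy analysis leverages; no extra perturbation
    is added here.
    """
    n = len(data)
    batch = data[rng.choice(n, size=batch_size, replace=False)]
    # Unbiased minibatch estimate of the gradient of the log-posterior.
    grad = grad_log_prior(theta) + (n / batch_size) * sum(
        grad_log_lik(theta, x) for x in batch
    )
    return theta + 0.5 * step_size * grad + rng.normal(
        0.0, np.sqrt(step_size), size=theta.shape
    )

# Toy usage: posterior over the mean of a unit-variance Gaussian,
# with a standard normal prior (both gradients are illustrative).
rng = np.random.default_rng(0)
data = rng.normal(1.0, 1.0, size=1000)
theta = np.zeros(1)
for _ in range(2000):
    theta = sgld_step(
        theta,
        grad_log_prior=lambda t: -t,      # d/dt log N(t; 0, 1)
        grad_log_lik=lambda t, x: x - t,  # d/dt log N(x; t, 1)
        data=data,
        step_size=1e-4,
        batch_size=50,
        rng=rng,
    )
```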

