Capacity Bounded Differential Privacy

2020-02-23

Abstract

Differential privacy has emerged as the gold standard for measuring the risk posed by an algorithm's output to the privacy of a single individual in a dataset. It is defined as the worst-case distance between the output distributions of an algorithm run on inputs that differ in a single person's data. In this work, we present a novel relaxation of differential privacy, capacity bounded differential privacy, where the adversary that distinguishes the output distributions is assumed to be capacity-bounded, i.e., bounded not in computational power, but in terms of the function class from which their attack algorithm is drawn. We model adversaries of this form using restricted f-divergences between probability distributions, and study properties of these definitions and of algorithms that satisfy them. Our results demonstrate that these definitions possess a number of interesting properties enjoyed by differential privacy and some of its existing relaxations; additionally, common mechanisms such as the Laplace and Gaussian mechanisms enjoy better privacy guarantees for the same added noise under these definitions.
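As background for the definitions the abstract references, here is a minimal sketch; the exact notation (in particular $D_{f,\mathcal{H}}$ and the class restriction) is an assumption about the paper's formalism, not a quotation from it. Standard $\varepsilon$-differential privacy requires, for every pair of neighboring datasets $D, D'$ (differing in one person's data) and every measurable set $S$ of outputs,

$$\Pr[M(D) \in S] \le e^{\varepsilon} \, \Pr[M(D') \in S].$$

A restricted f-divergence replaces the supremum over all test functions in the standard variational form of an f-divergence with a supremum over a fixed function class $\mathcal{H}$, where $f^*$ denotes the convex conjugate of $f$:

$$D_{f,\mathcal{H}}(P \,\|\, Q) = \sup_{h \in \mathcal{H}} \; \mathbb{E}_{x \sim P}[h(x)] - \mathbb{E}_{x \sim Q}[f^*(h(x))].$$

Taking $\mathcal{H}$ to be all measurable functions recovers the usual f-divergence; shrinking $\mathcal{H}$ (e.g., to linear or polynomial adversaries) can only decrease the divergence, which is the intuition for why the same added noise yields a stronger guarantee against a capacity-bounded adversary.

For concreteness, below is a short sketch of the Laplace mechanism the abstract mentions; this is the textbook construction (noise of scale $\Delta/\varepsilon$ for a query of $\ell_1$-sensitivity $\Delta$ satisfies $\varepsilon$-differential privacy), not code from the paper.

```python
import numpy as np

def laplace_mechanism(query_value, sensitivity, epsilon, rng=None):
    """Textbook Laplace mechanism: adds Laplace(sensitivity / epsilon) noise,
    which satisfies epsilon-differential privacy for a query whose
    l1-sensitivity is `sensitivity`. (Standard construction, not the paper's code.)
    """
    rng = rng if rng is not None else np.random.default_rng()
    scale = sensitivity / epsilon  # noise scale Delta / epsilon
    return query_value + rng.laplace(loc=0.0, scale=scale)

# Example: privately release a count query (sensitivity 1) with epsilon = 0.5.
print(laplace_mechanism(query_value=42.0, sensitivity=1.0, epsilon=0.5))
```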
