Blind Justice: Fairness with Encrypted Sensitive Attributes


Abstract

Recent work has explored how to train machine learning models which do not discriminate against any subgroup of the population as determined by sensitive attributes such as gender or race. To avoid disparate treatment, sensitive attributes should not be considered. On the other hand, in order to avoid disparate impact, sensitive attributes must be examined, e.g., in order to learn a fair model, or to check if a given model is fair. We introduce methods from secure multi-party computation which allow us to avoid both. By encrypting sensitive attributes, we show how an outcome-based fair model may be learned, checked, or have its outputs verified and held to account, without users revealing their sensitive attributes.
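The secure multi-party computation the abstract refers to keeps sensitive attributes available only in encrypted (secret-shared) form while still permitting fairness checks over them. Below is a minimal Python sketch, not the authors' protocol, of the simplest such primitive: two non-colluding parties each hold an additive secret share of every user's binary sensitive attribute and jointly compute a demographic-parity gap over public model decisions, opening only the aggregates. The two-party setup, the field modulus, and all variable names are illustrative assumptions.

```python
import secrets

P = 2**61 - 1  # a Mersenne prime; all shares live in the field Z_P

def share(x):
    """Split integer x into two additive shares with x = s0 + s1 (mod P)."""
    s0 = secrets.randbelow(P)
    s1 = (x - s0) % P
    return s0, s1

def reconstruct(s0, s1):
    """Recombine two additive shares into the underlying value."""
    return (s0 + s1) % P

# Each user secret-shares a binary sensitive attribute a_i and sends one
# share to each of two non-colluding parties. The clear values below exist
# only to simulate the users; neither party ever sees them.
attrs = [1, 0, 1, 1, 0]   # sensitive attributes a_i (never revealed)
preds = [1, 0, 0, 1, 1]   # public model decisions yhat_i

shares = [share(a) for a in attrs]
party0 = [s0 for s0, _ in shares]
party1 = [s1 for _, s1 in shares]

# Demographic parity needs sum_i a_i*yhat_i and sum_i a_i. Since yhat_i is
# public, each party scales and sums its shares locally:
# a_i * yhat_i = (s0_i + s1_i) * yhat_i (mod P).
num0 = sum(s * y for s, y in zip(party0, preds)) % P
num1 = sum(s * y for s, y in zip(party1, preds)) % P
cnt0 = sum(party0) % P
cnt1 = sum(party1) % P

# Only the aggregates are opened, so no individual a_i leaks.
n = len(attrs)
pos_rate_a1 = reconstruct(num0, num1) / reconstruct(cnt0, cnt1)
pos_rate_a0 = (sum(preds) - reconstruct(num0, num1)) / (n - reconstruct(cnt0, cnt1))
print("demographic parity gap:", abs(pos_rate_a1 - pos_rate_a0))
```

The property exploited here is that additive shares can be scaled by public constants and summed locally, so the group statistics are recovered exactly while no individual attribute is ever reconstructed. The paper builds on this kind of primitive to also learn and verify fair models, which additionally requires secure multiplication of private values.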

