TRAINING INDIVIDUALLY FAIR ML MODELS WITH SENSITIVE SUBSPACE ROBUSTNESS

2020-01-02

Abstract

We propose an approach to training machine learning models that are fair in the sense that their performance is invariant under certain perturbations to the features. For example, the performance of a resume screening system should be invariant under changes to the name of the applicant. We formalize this intuitive notion of fairness by connecting it to the original notion of individual fairness put forth by Dwork et al., and we show that the proposed approach achieves this notion of fairness. We also demonstrate the effectiveness of the approach on two machine learning tasks that are susceptible to gender and racial biases.
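The core idea of training for invariance under sensitive perturbations can be illustrated with a minimal sketch. This is an illustrative assumption, not the paper's exact algorithm: we restrict adversarial perturbations to a single sensitive direction `v` (e.g., a direction in feature space that encodes a name or gender proxy), take the worse of the two extreme perturbations, and do gradient descent on that worst-case logistic loss. All function names and the worst-of-two simplification are hypothetical.

```python
import numpy as np

def logistic_loss(w, X, y):
    """Mean logistic loss for labels y in {-1, +1}."""
    z = X @ w
    return np.mean(np.log1p(np.exp(-y * z)))

def fair_train(X, y, v, eps=1.0, lr=0.1, steps=200):
    """Sketch of robust training: minimize the worst-case loss over
    perturbations of size eps confined to the sensitive direction v
    (a unit vector). Not the paper's implementation, just the idea."""
    rng = np.random.default_rng(0)
    w = rng.normal(scale=0.01, size=X.shape[1])
    for _ in range(steps):
        # adversary: shift all points by +eps*v or -eps*v, keep the worse shift
        Xp, Xm = X + eps * v, X - eps * v
        lp, lm = logistic_loss(w, Xp, y), logistic_loss(w, Xm, y)
        Xw = Xp if lp >= lm else Xm
        # gradient of the logistic loss at the adversarial points
        z = Xw @ w
        g = -(y / (1 + np.exp(y * z))) @ Xw / len(y)
        w -= lr * g
    return w
```

Because the adversary can move inputs freely along `v`, the minimizer is pushed to put little weight on the sensitive direction, so predictions become (approximately) invariant to it.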
