
Delayed Impact of Fair Machine Learning

2020-03-11

Abstract

Fairness in machine learning has predominantly been studied in static classification settings without concern for how decisions change the underlying population over time. Conventional wisdom suggests that fairness criteria promote the long-term well-being of those groups they aim to protect. We study how static fairness criteria interact with temporal indicators of well-being, such as long-term improvement, stagnation, and decline in a variable of interest. We demonstrate that even in a one-step feedback model, common fairness criteria in general do not promote improvement over time, and may in fact cause harm in cases where an unconstrained objective would not. We completely characterize the delayed impact of three standard criteria, contrasting the regimes in which these exhibit qualitatively different behavior. In addition, we find that a natural form of measurement error broadens the regime in which fairness criteria perform favorably. Our results highlight the importance of measurement and temporal modeling in the evaluation of fairness criteria, suggesting a range of new challenges and trade-offs.
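The one-step feedback setting the abstract refers to can be illustrated numerically. The sketch below (not the authors' code) models an institution that accepts applicants above a score threshold; accepted applicants' scores rise on repayment and fall on default, and the "delayed impact" is the expected change in a group's mean score. It compares an unconstrained utility-maximizing policy with one constrained to (approximately) equal acceptance rates across groups. The score grid, repayment curve, group distributions, and utility/impact constants are all illustrative assumptions, not values from the paper.

```python
# Minimal sketch of a one-step feedback model: threshold lending policies,
# institution utility, and expected change in group mean score (delayed impact).
# All numeric values are illustrative assumptions.
import numpy as np

scores = np.arange(300, 851, 50)                    # discrete score grid
repay_prob = np.linspace(0.45, 0.95, len(scores))   # repayment prob. rises with score

# Hypothetical score distributions for two groups (group B skews lower).
pi_A = np.array([0.01, 0.02, 0.04, 0.06, 0.08, 0.10, 0.13, 0.15, 0.15, 0.12, 0.09, 0.05])
pi_B = np.array([0.09, 0.12, 0.15, 0.15, 0.13, 0.11, 0.08, 0.06, 0.05, 0.03, 0.02, 0.01])

u_plus, u_minus = 1.0, -4.0      # institution's gain on repayment / loss on default
c_plus, c_minus = 75.0, -150.0   # applicant score change on repayment / default

def policy_stats(pi, threshold):
    """Acceptance rate, institution utility, and expected mean-score change
    when every applicant with score >= threshold is accepted."""
    accepted = scores >= threshold
    rate = pi[accepted].sum()
    utility = np.sum(pi[accepted] * (repay_prob[accepted] * u_plus
                                     + (1 - repay_prob[accepted]) * u_minus))
    delta_mu = np.sum(pi[accepted] * (repay_prob[accepted] * c_plus
                                      + (1 - repay_prob[accepted]) * c_minus))
    return rate, utility, delta_mu

def best_threshold(pi):
    """Unconstrained policy: threshold maximizing institution utility."""
    return max(scores, key=lambda t: policy_stats(pi, t)[1])

# Unconstrained (max-utility) policy: each group gets its own optimal threshold.
for name, pi in [("A", pi_A), ("B", pi_B)]:
    t = best_threshold(pi)
    rate, util, dmu = policy_stats(pi, t)
    print(f"MaxUtil   group {name}: threshold={t}, rate={rate:.2f}, delta_mu={dmu:+.1f}")

# Demographic-parity-style policy: per-group thresholds with (approximately)
# equal acceptance rates, chosen to maximize total institution utility.
best = None
for tA in scores:
    rA = policy_stats(pi_A, tA)[0]
    tB = min(scores, key=lambda t: abs(policy_stats(pi_B, t)[0] - rA))
    total = policy_stats(pi_A, tA)[1] + policy_stats(pi_B, tB)[1]
    if best is None or total > best[0]:
        best = (total, tA, tB)
_, tA, tB = best
for name, pi, t in [("A", pi_A, tA), ("B", pi_B, tB)]:
    rate, util, dmu = policy_stats(pi, t)
    print(f"DemParity group {name}: threshold={t}, rate={rate:.2f}, delta_mu={dmu:+.1f}")
```

Depending on the assumed distributions and constants, the constrained policy can lower the threshold for the disadvantaged group enough that expected defaults outweigh repayments, making delta_mu negative even where the unconstrained policy would not cause harm; this is the qualitative effect the abstract describes.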

