
Fairness Without Demographics in Repeated Loss Minimization

2020-03-19

Abstract

Machine learning models (e.g., speech recognizers) are usually trained to minimize average loss, which results in representation disparity: minority groups (e.g., non-native speakers) contribute less to the training objective and thus tend to suffer higher loss. Worse, as model accuracy affects user retention, a minority group can shrink over time. In this paper, we first show that the status quo of empirical risk minimization (ERM) amplifies representation disparity over time, which can even make initially fair models unfair. To mitigate this, we develop an approach based on distributionally robust optimization (DRO), which minimizes the worst-case risk over all distributions close to the empirical distribution. We prove that this approach controls the risk of the minority group at each time step, in the spirit of Rawlsian distributive justice, while remaining oblivious to the identity of the groups. We demonstrate that DRO prevents disparity amplification on examples where ERM fails, and show improvements in minority group user satisfaction in a real-world text autocomplete task.
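For chi-squared divergence balls, the worst-case risk in the abstract admits a well-known one-dimensional dual, which makes the objective practical to evaluate and optimize. The sketch below illustrates that formulation on synthetic per-example losses; the constant C = sqrt(2r + 1) (which depends on the chi-squared convention), the name dro_risk, and the toy data are assumptions for illustration, not taken from the paper.

    # A minimal sketch of a chi-squared DRO objective, assuming the
    # standard one-dimensional dual form
    #     R_dro = inf_eta  C * sqrt(E[(loss - eta)_+^2]) + eta,
    # where the constant C depends on the ball radius r and on the
    # chi-squared convention (C = sqrt(2r + 1) is assumed here).
    import numpy as np
    from scipy.optimize import minimize_scalar

    def dro_risk(losses: np.ndarray, r: float) -> float:
        """Worst-case average loss over all distributions within
        chi-squared distance r of the empirical distribution."""
        C = np.sqrt(2.0 * r + 1.0)

        def dual(eta: float) -> float:
            excess = np.maximum(losses - eta, 0.0)  # only losses above eta count
            return C * np.sqrt(np.mean(excess ** 2)) + eta

        # The dual is convex in eta, so a bounded scalar search suffices.
        res = minimize_scalar(dual,
                              bounds=(losses.min() - 2.0, losses.max()),
                              method="bounded")
        return float(res.fun)

    # Toy example: a large low-loss majority and a small high-loss minority.
    rng = np.random.default_rng(0)
    losses = np.concatenate([rng.normal(0.2, 0.05, 900),   # majority group
                             rng.normal(1.0, 0.10, 100)])  # minority group
    print("average risk (ERM):", losses.mean())               # dominated by majority
    print("worst-case risk (DRO):", dro_risk(losses, r=1.0))  # tracks the high-loss tail

The useful property of the dual is that the supremum over distributions collapses to a convex scalar minimization over eta: examples with loss above eta (disproportionately the minority group) are the only ones that contribute, so they are effectively upweighted without the objective ever observing group identities.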


