
Fast Convergence of Regularized Learning in Games

2020-02-04

Abstract 

We show that natural classes of regularized learning algorithms with a form of recency bias achieve faster convergence rates to approximate efficiency and to coarse correlated equilibria in multiplayer normal form games. When each player in a game uses an algorithm from our class, their individual regret decays at $O(T^{-3/4})$, while the sum of utilities converges to an approximate optimum at $O(T^{-1})$, an improvement upon the worst-case $O(T^{-1/2})$ rates. We show a black-box reduction for any algorithm in the class to achieve $\tilde{O}(T^{-1/2})$ rates against an adversary, while maintaining the faster rates against algorithms in the class. Our results extend those of Rakhlin and Sridharan [17] and Daskalakis et al. [4], who analyzed only two-player zero-sum games for specific algorithms.
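The "recency bias" the abstract refers to can be instantiated by optimistic updates that treat the most recent loss vector as a prediction of the next one, as in Optimistic Hedge. Below is a minimal sketch under that assumption; the function name, step size, and the example game are illustrative, not taken from the paper.

```python
import math

def optimistic_hedge(loss_history, n, eta):
    """One step of Optimistic Hedge over n actions (illustrative sketch,
    not the paper's exact algorithm class).

    Plays weights proportional to exp(-eta * (cumulative loss + last loss)):
    counting the most recent loss vector twice is the recency bias that
    yields the faster in-game rates.
    """
    cum = [0.0] * n
    for loss in loss_history:
        for i in range(n):
            cum[i] += loss[i]
    if loss_history:  # optimistic prediction: repeat the last observed loss
        last = loss_history[-1]
        for i in range(n):
            cum[i] += last[i]
    w = [math.exp(-eta * c) for c in cum]
    z = sum(w)
    return [wi / z for wi in w]

# Two players repeatedly playing a small zero-sum game, both running
# Optimistic Hedge (hypothetical demo parameters).
A = [[2.0, -1.0], [-1.0, 1.0]]   # row player's payoff; column player gets -A
T, eta = 200, 0.5
row_hist, col_hist = [], []
for t in range(T):
    x = optimistic_hedge(row_hist, 2, eta)
    y = optimistic_hedge(col_hist, 2, eta)
    # Row maximizes x^T A y, so its loss vector is -(A y); column's is x^T A.
    row_hist.append([-sum(A[i][j] * y[j] for j in range(2)) for i in range(2)])
    col_hist.append([sum(x[i] * A[i][j] for i in range(2)) for j in range(2)])
```

When both players use such an algorithm, the sum of their regrets telescopes against the doubled-loss terms, which is the mechanism behind the $O(T^{-1})$ welfare rate; against an arbitrary adversary the same algorithm falls back to the standard $\tilde{O}(T^{-1/2})$ guarantee.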

