
ON THE VARIANCE OF THE ADAPTIVE LEARNING RATE AND BEYOND

2020-01-02

Abstract

The learning rate warmup heuristic achieves remarkable success in stabilizing training, accelerating convergence, and improving generalization for adaptive stochastic optimization algorithms like RMSprop and Adam. Pursuing the theory behind warmup, we identify a problem of the adaptive learning rate: its variance is problematically large in the early stage, and we presume warmup works as a variance reduction technique. We provide both empirical and theoretical evidence to verify our hypothesis. We further propose Rectified Adam (RAdam), a novel variant of Adam, by introducing a term to rectify the variance of the adaptive learning rate. Experimental results on image classification, language modeling, and neural machine translation verify our intuition and demonstrate the efficacy and robustness of RAdam.

[Figure: training perplexity]
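To make the rectification idea concrete, below is a minimal sketch of a RAdam-style update step, following the rule the abstract describes (fall back to a momentum-only update while the variance of the adaptive learning rate is intractable, otherwise scale the Adam-like update by a rectification term). The function name, scalar-parameter setting, and hyperparameter defaults are illustrative assumptions, not the authors' reference implementation; some implementations also use a threshold of 5 instead of 4.

```python
import math

def radam_step(param, grad, m, v, t, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    """One RAdam-style update for a single scalar parameter (illustrative sketch)."""
    # Exponential moving averages of the gradient and squared gradient, as in Adam.
    m = beta1 * m + (1 - beta1) * grad
    v = beta2 * v + (1 - beta2) * grad * grad

    # Bias-corrected first moment.
    m_hat = m / (1 - beta1 ** t)

    # Length of the approximated simple moving average (SMA).
    rho_inf = 2.0 / (1.0 - beta2) - 1.0
    rho_t = rho_inf - 2.0 * t * (beta2 ** t) / (1.0 - beta2 ** t)

    if rho_t > 4.0:
        # Variance of the adaptive learning rate is tractable: apply the
        # rectification term and use the adaptive (Adam-like) update.
        v_hat = math.sqrt(v / (1.0 - beta2 ** t))
        r_t = math.sqrt(((rho_t - 4.0) * (rho_t - 2.0) * rho_inf) /
                        ((rho_inf - 4.0) * (rho_inf - 2.0) * rho_t))
        param = param - lr * r_t * m_hat / (v_hat + eps)
    else:
        # Early steps: the variance of the adaptive learning rate is too large,
        # so fall back to an SGD-with-momentum style update.
        param = param - lr * m_hat

    return param, m, v
```

With the default beta2 = 0.999, rho_t stays below the threshold only for the first few steps, so the optimizer automatically skips the high-variance adaptive scaling early in training, which is the effect that a hand-tuned warmup schedule approximates.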
