
LARGE BATCH OPTIMIZATION FOR DEEP LEARNING: TRAINING BERT IN 76 MINUTES


Abstract

Training large deep neural networks on massive datasets is computationally very challenging. There has been a recent surge of interest in using large batch stochastic optimization methods to tackle this issue. The most prominent algorithm in this line of research is LARS, which by employing layerwise adaptive learning rates trains ResNet on ImageNet in a few minutes. However, LARS performs poorly for attention models like BERT, indicating that its performance gains are not consistent across tasks. In this paper, we first study a principled layerwise adaptation strategy to accelerate training of deep neural networks using large mini-batches. Using this strategy, we develop a new layerwise adaptive large batch optimization technique called LAMB; we then provide convergence analysis of LAMB as well as LARS, showing convergence to a stationary point in general nonconvex settings. Our empirical results demonstrate the superior performance of LAMB across various tasks such as BERT and ResNet-50 training with very little hyperparameter tuning. In particular, for BERT training, our optimizer enables use of very large batch sizes of 32868 without any degradation of performance. By increasing the batch size to the memory limit of a TPUv3 Pod, BERT training time can be reduced from 3 days to just 76 minutes (Table 1).
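As a concrete illustration of the layerwise adaptation idea summarized above, the sketch below implements one LAMB-style update step for a single layer in NumPy: an Adam-style update direction (with decoupled weight decay) is computed, then rescaled by a per-layer trust ratio of the form ||w|| / ||update||. This is a minimal sketch of the update rule as described in the paper, not the authors' reference implementation; the hyperparameter defaults, the identity scaling function, and the helper name `lamb_update` are illustrative assumptions.

```python
import numpy as np

def lamb_update(param, grad, m, v, step, lr=1e-3,
                beta1=0.9, beta2=0.999, eps=1e-6, weight_decay=0.01):
    """One illustrative LAMB-style step for a single layer's parameter tensor.

    m, v are this layer's Adam-style first/second moment estimates.
    Returns the updated (param, m, v).
    """
    # Adam-style moment estimates with bias correction
    m = beta1 * m + (1 - beta1) * grad
    v = beta2 * v + (1 - beta2) * grad ** 2
    m_hat = m / (1 - beta1 ** step)
    v_hat = v / (1 - beta2 ** step)

    # Adam update direction plus decoupled weight decay
    update = m_hat / (np.sqrt(v_hat) + eps) + weight_decay * param

    # Layerwise trust ratio: scale the step by ||w|| / ||update||
    w_norm = np.linalg.norm(param)
    u_norm = np.linalg.norm(update)
    trust_ratio = w_norm / u_norm if w_norm > 0 and u_norm > 0 else 1.0

    param = param - lr * trust_ratio * update
    return param, m, v

# Example usage for one layer (hypothetical shapes):
# w, m, v = np.random.randn(4, 4), np.zeros((4, 4)), np.zeros((4, 4))
# g = np.random.randn(4, 4)
# w, m, v = lamb_update(w, g, m, v, step=1)
```

The trust ratio is what distinguishes this family of methods from plain Adam: layers whose weights are large relative to their proposed update take proportionally larger steps, which is the property that makes very large batch sizes workable.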
