
STOCHASTIC WEIGHT AVERAGING IN PARALLEL: LARGE-BATCH TRAINING THAT GENERALIZES WELL

2020-01-02

Abstract

We propose Stochastic Weight Averaging in Parallel (SWAP), an algorithm to accelerate DNN training. Our algorithm uses large mini-batches to compute an approximate solution quickly and then refines it by averaging the weights of multiple models computed independently and in parallel. The resulting models generalize as well as those trained with small mini-batches but are produced in a substantially shorter time. We demonstrate the reduction in training time and the good generalization performance of the resulting models on the computer vision datasets CIFAR10, CIFAR100, and ImageNet.
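The abstract does not give implementation details, but its refinement step (averaging the weights of several independently trained models) can be sketched as below. This is a minimal PyTorch illustration, not the paper's code: the helper name `average_weights` and the simple elementwise mean over parameters are assumptions made for the example.

```python
import copy
import torch

def average_weights(models):
    """Average the parameters of several independently trained models,
    as in the weight-averaging step described in the abstract.
    Assumes all models share the same architecture."""
    avg_state = copy.deepcopy(models[0].state_dict())
    for key in avg_state:
        # Stack the corresponding tensor from every model and take the mean,
        # then cast back to the original dtype (e.g. for integer buffers).
        stacked = torch.stack([m.state_dict()[key].float() for m in models])
        avg_state[key] = stacked.mean(dim=0).to(avg_state[key].dtype)
    averaged = copy.deepcopy(models[0])
    averaged.load_state_dict(avg_state)
    return averaged
```

In a SWAP-style run, each worker would continue training its own copy of the large-batch solution with small mini-batches, and a function like the one above would then combine the resulting weights into the final model.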
