COMPARING FINE-TUNING AND REWINDING IN NEURAL NETWORK PRUNING

Abstract

Neural network pruning is a popular technique for reducing inference costs by removing connections, neurons, or other structure from the network. In the literature, pruning typically follows a standard procedure: train the network, remove unwanted structure (pruning), and train the resulting network further to recover accuracy (fine-tuning). In this paper, we explore an alternative to fine-tuning: rewinding. Rather than continuing to train the pruned network from its final weights (fine-tuning), rewind the remaining weights to their values from earlier in training, and re-train the network for the remainder of the original training process. We find that this procedure, which repurposes the strategy for finding lottery tickets presented by Frankle et al. (2019), makes it possible to prune networks further than fine-tuning allows for a given target accuracy, provided that the weights are rewound to a suitable point in training. We also find that there is a wide range of suitable rewind points that achieve higher accuracy than fine-tuning across all tested networks. Based on these results, we argue that practitioners should explore rewinding as an alternative to fine-tuning for neural network pruning.
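
To make the two retraining strategies concrete, the sketch below contrasts them under global magnitude pruning. It is a minimal illustration, not the authors' code: PyTorch is assumed, train_epoch is a hypothetical helper that runs one epoch of the original training schedule (using that epoch's learning rate, and the final rate once the schedule is exhausted), and the pruning mask is re-applied once per epoch for brevity, where a faithful implementation would zero pruned weights after every optimizer step.

    import copy
    import torch

    def global_magnitude_mask(model, sparsity):
        # Binary masks that zero the smallest-magnitude fraction of weights,
        # thresholded globally across all weight tensors (dim > 1).
        scores = torch.cat([p.detach().abs().flatten()
                            for p in model.parameters() if p.dim() > 1])
        threshold = torch.quantile(scores, sparsity)
        return {name: (p.detach().abs() > threshold).float()
                      if p.dim() > 1 else torch.ones_like(p)
                for name, p in model.named_parameters()}

    def prune_then_retrain(model, train_epoch, total_epochs, sparsity,
                           rewind_epoch=None, finetune_epochs=0):
        # Phase 1: ordinary training; snapshot the weights at the rewind point.
        snapshot = None
        for epoch in range(total_epochs):
            if epoch == rewind_epoch:
                snapshot = copy.deepcopy(model.state_dict())
            train_epoch(model, epoch)

        # Phase 2: prune the lowest-magnitude weights.
        mask = global_magnitude_mask(model, sparsity)

        if snapshot is not None:
            # Rewinding: restore the remaining weights to their earlier values
            # and re-run the remainder of the original training schedule.
            model.load_state_dict(snapshot)
            retrain = range(rewind_epoch, total_epochs)
        else:
            # Fine-tuning: keep the final weights and simply train further
            # at the final (low) learning rate.
            retrain = range(total_epochs, total_epochs + finetune_epochs)

        for epoch in retrain:
            with torch.no_grad():
                for name, p in model.named_parameters():
                    p.mul_(mask[name])  # keep pruned weights at zero
            train_epoch(model, epoch)
        return model

Under these assumptions, prune_then_retrain(model, train_epoch, 90, 0.9, rewind_epoch=10) would rewind to epoch 10 and re-train for the remaining 80 epochs of the original schedule, while rewind_epoch=None with finetune_epochs=40 would give the fine-tuning baseline instead.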
