
DYNAMIC MODEL PRUNING WITH FEEDBACK

2020-01-02

Abstract

Deep neural networks often have millions of parameters. This can hinder their deployment to low-end devices, not only due to high memory requirements but also because of increased latency at inference. We propose a novel model compression method that generates a sparse trained model without additional overhead: by (i) dynamically allocating the sparsity pattern and (ii) incorporating a feedback signal to reactivate prematurely pruned weights, we obtain a performant sparse model in a single training pass (retraining is not needed, but can further improve performance). We evaluate our method on CIFAR-10 and ImageNet, and show that the obtained sparse models can reach the state-of-the-art performance of dense models. Moreover, their performance surpasses that of models generated by all previously proposed pruning schemes.
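The core mechanism the abstract describes — recomputing the sparsity pattern dynamically at each step while keeping the dense weights updated, so that prematurely pruned weights can be reactivated by the feedback from training — can be sketched in NumPy. This is an illustrative sketch on a toy quadratic objective, not the paper's implementation; the function names and hyperparameters are assumptions:

```python
import numpy as np

def topk_mask(w, sparsity):
    """Keep the (1 - sparsity) fraction of weights with largest magnitude."""
    k = int(round((1.0 - sparsity) * w.size))
    mask = np.zeros_like(w)
    idx = np.argsort(np.abs(w))[-k:]   # indices of the k largest |w|
    mask[idx] = 1.0
    return mask

def train_with_feedback(w, grad_fn, lr=0.1, sparsity=0.8, steps=200):
    """Dynamic pruning sketch: the gradient is evaluated at the masked
    (sparse) weights, but the update is applied to the dense weights.
    A weight pruned at one step can therefore grow back and re-enter
    the mask later -- the feedback/reactivation effect."""
    for _ in range(steps):
        mask = topk_mask(w, sparsity)   # reallocate sparsity pattern every step
        g = grad_fn(mask * w)           # forward/backward through the sparse model
        w = w - lr * g                  # dense update keeps pruned weights alive
    return topk_mask(w, sparsity) * w

# Toy usage: recover a sparse target under the quadratic loss 0.5*||w - t||^2,
# whose gradient at w is simply (w - t).
rng = np.random.default_rng(0)
target = np.zeros(10)
target[:2] = [3.0, -2.0]                # truly sparse optimum
w0 = rng.normal(size=10)
w_sparse = train_with_feedback(w0, grad_fn=lambda w: w - target)
```

Even if the two target coordinates start with small magnitudes and are pruned early on, the dense update keeps pushing them toward the target until they re-enter the top-k mask, which is exactly the reactivation behavior a one-shot (static) pruning scheme cannot provide.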

