SCALABLE MODEL COMPRESSION BY ENTROPY PENALIZED REPARAMETERIZATION

2020-01-02

Abstract

We describe a simple and general neural network weight compression approach, in which the network parameters (weights and biases) are represented in a “latent” space, amounting to a reparameterization. This space is equipped with a learned probability model, which is used to impose an entropy penalty on the parameter representation during training, and to compress the representation using a simple arithmetic coder after training. Classification accuracy and model compressibility are maximized jointly, with the bitrate–accuracy trade-off specified by a hyperparameter. We evaluate the method on the MNIST, CIFAR-10 and ImageNet classification benchmarks using six distinct model architectures. Our results show that state-of-the-art model compression can be achieved in a scalable and general way without requiring complex procedures such as multi-stage training.
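
The abstract outlines a training objective of the form "task loss plus an entropy penalty on the latent parameter representation", with a hyperparameter controlling the bitrate–accuracy trade-off. The snippet below is a minimal, hypothetical sketch of that idea in PyTorch; the names (LatentWeight, penalized_loss, lam) are illustrative, the prior is a simple factorized Gaussian with an additive-noise quantization surrogate rather than the paper's learned probability model, and the latent-to-weight decoder is just the identity.

    import math
    import torch
    import torch.nn as nn

    class LatentWeight(nn.Module):
        """One weight tensor stored as a latent with a learned factorized Gaussian prior (illustrative)."""
        def __init__(self, shape):
            super().__init__()
            self.latent = nn.Parameter(torch.zeros(shape))     # continuous surrogate for the coded representation
            self.log_scale = nn.Parameter(torch.zeros(shape))  # learned per-element prior scale

        def forward(self):
            # Additive uniform noise stands in for quantization during training.
            noisy = self.latent + torch.rand_like(self.latent) - 0.5
            scale = self.log_scale.exp()
            # Negative log-likelihood under the Gaussian prior, converted from nats to bits:
            # an estimate of the code length an arithmetic coder would need after training.
            nll = 0.5 * (noisy / scale) ** 2 + self.log_scale + 0.5 * math.log(2 * math.pi)
            bits = nll.sum() / math.log(2.0)
            return noisy, bits  # the latent-to-weight decoder is the identity in this sketch

    def penalized_loss(task_loss, total_bits, lam=1e-4):
        # lam is the bitrate-accuracy trade-off hyperparameter mentioned in the abstract.
        return task_loss + lam * total_bits

In a full model, each layer's weights would be produced from such latents (possibly through a shared decoder over grouped latents), and the per-tensor bit estimates would be summed into total_bits and passed to penalized_loss together with the classification loss.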
