Weightless: Lossy weight encoding for deep neural network compression


Abstract

The large memory requirements of deep neural networks limit their deployment and adoption on many devices. Model compression methods effectively reduce the memory requirements of these models, usually through applying transformations such as weight pruning or quantization. In this paper, we present a novel scheme for lossy weight encoding co-designed with weight simplification techniques. The encoding is based on the Bloomier filter, a probabilistic data structure that can save space at the cost of introducing random errors. Leveraging the ability of neural networks to tolerate these imperfections and by re-training around the errors, the proposed technique, named Weightless, can compress weights by up to 496× without loss of model accuracy. This results in up to a 1.51× improvement over the state-of-the-art.
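
The Bloomier filter at the heart of this scheme is easiest to see in code. Below is a minimal Python sketch of the data structure (Chazelle et al., 2004) as the abstract characterizes it, not the paper's implementation: the table slack factor, the hash construction via Python's built-in hashing, the 16-bit fingerprint source, and the value width are all illustrative assumptions. The property Weightless relies on is that keys in the construction set (e.g., indices of surviving weights in a pruned layer) decode exactly, while absent keys decode to arbitrary values, the "random errors" that re-training then absorbs.

```python
import random

class BloomierFilter:
    """Minimal sketch of a Bloomier filter: a static key -> value map that
    trades space for correctness, returning arbitrary values for keys
    outside the set it was built on."""

    def __init__(self, kv, slack=2.0, k=3, value_bits=8, seed=0):
        # kv maps keys to small integers (< 2**value_bits).
        self.m = max(k, int(slack * len(kv)) + 1)  # table size (assumed slack factor)
        self.k = k                                 # hash slots per key
        self.mask = (1 << value_bits) - 1
        self.seed = seed
        self.table = [0] * self.m
        self._build(kv)

    def _slots(self, key):
        # k distinct table slots for `key`, derived deterministically.
        rng = random.Random(hash((self.seed, key)))
        slots = []
        while len(slots) < self.k:
            j = rng.randrange(self.m)
            while j in slots:
                j = (j + 1) % self.m
            slots.append(j)
        return slots

    def _fingerprint(self, key):
        # Per-key mask, so absent keys decode to random-looking values.
        return random.Random(hash((self.seed, key, "fp"))).getrandbits(16) & self.mask

    def _build(self, kv):
        # Greedy peeling: repeatedly remove a key owning a slot that no
        # other remaining key touches, then fill the table in reverse
        # removal order so each key's equation is never disturbed later.
        remaining = set(kv)
        order = []
        while remaining:
            counts = {}
            for key in remaining:
                for j in self._slots(key):
                    counts[j] = counts.get(j, 0) + 1
            peeled = next(
                ((key, slot) for key in remaining
                 for slot in self._slots(key) if counts[slot] == 1), None)
            if peeled is None:
                raise RuntimeError("construction failed; rebuild with a new seed")
            order.append(peeled)
            remaining.remove(peeled[0])
        for key, slot in reversed(order):
            acc = self._fingerprint(key) ^ (kv[key] & self.mask)
            for j in self._slots(key):
                if j != slot:
                    acc ^= self.table[j]
            self.table[slot] = acc

    def get(self, key):
        # XOR of the key's slots with its fingerprint recovers the value;
        # for absent keys the result is effectively random (the lossy part).
        acc = self._fingerprint(key)
        for j in self._slots(key):
            acc ^= self.table[j]
        return acc & self.mask
```

A toy usage, treating keys as flat indices of a pruned layer's nonzero weights and values as 4-bit quantization cluster IDs (both hypothetical choices for illustration):

```python
nonzeros = {i: (7 * i + 3) % 16 for i in range(0, 1000, 5)}  # 200 surviving weights
bf = BloomierFilter(nonzeros, value_bits=4, seed=42)
assert all(bf.get(i) == v for i, v in nonzeros.items())  # members decode exactly
print(bf.get(1))  # absent index: arbitrary 4-bit value, to be re-trained around
```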


