A Mean Field Theory of Quantized Deep Networks: The Quantization-Depth Trade-Off

2020-02-20

Abstract

Reducing the precision of weights and activations in neural network training, with minimal impact on performance, is essential for deploying these models in resource-constrained environments. We apply mean field techniques to networks with quantized activations in order to evaluate the degree to which quantization degrades signal propagation at initialization. We derive initialization schemes which maximize signal propagation in such networks, and suggest why this is helpful for generalization. Building on these results, we obtain a closed-form implicit equation for the maximal trainable depth (and hence model capacity), given N, the number of quantization levels in the activation function. Solving this equation numerically yields the asymptotic scaling of the maximal trainable depth with N.
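The mean-field analysis the abstract refers to tracks how the pre-activation variance evolves with depth under a quantized activation. A minimal sketch of that recursion is below; the quantizer form, the parameter values, and all function names are assumptions for illustration, not the paper's actual construction. The expectation over a Gaussian pre-activation is estimated by Monte Carlo with a fixed seed.

```python
import numpy as np

def quantize(x, n_levels=4, step=1.0):
    # Hypothetical symmetric mid-rise quantizer with n_levels output values.
    half = n_levels / 2
    return step * np.clip(np.floor(x / step) + 0.5, -half + 0.5, half - 0.5)

def variance_map(q, sigma_w=1.0, sigma_b=0.0, n_levels=4, n_samples=100_000, seed=0):
    # One step of the mean-field variance recursion
    #   q_{l+1} = sigma_w^2 * E_{z ~ N(0, q_l)}[phi(z)^2] + sigma_b^2,
    # where phi is the quantized activation; the expectation is
    # estimated by Monte Carlo sampling.
    rng = np.random.default_rng(seed)
    z = rng.normal(0.0, np.sqrt(q), n_samples)
    return sigma_w**2 * np.mean(quantize(z, n_levels) ** 2) + sigma_b**2

# Iterate the map to its fixed point q*. In mean-field treatments, how
# fast perturbations around q* decay sets the depth over which signals
# (and hence gradients) propagate at initialization.
q = 1.0
for _ in range(50):
    q = variance_map(q, sigma_w=1.2, n_levels=8)
print(q)
```

Because the quantizer saturates, the variance map is bounded, so the iteration settles at a finite fixed point; sweeping `sigma_w` and `n_levels` in a sketch like this is one way to see numerically how coarser quantization shortens the trainable depth.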

