GRADIENT ℓ1 REGULARIZATION FOR QUANTIZATION ROBUSTNESS

2020-01-02

Abstract
We analyze the effect of quantizing weights and activations of neural networks on their loss and derive a simple regularization scheme that improves robustness against post-training quantization. By training quantization-ready networks, our approach enables storing a single set of weights that can be quantized on demand to different bit-widths as the energy and memory requirements of the application change. Unlike quantization-aware training using the straight-through estimator, which only targets a specific bit-width and requires access to the training data and pipeline, our regularization-based method paves the way for "on the fly" post-training quantization to various bit-widths. We show that by modeling quantization as an ℓ∞-bounded perturbation, the first-order term in the loss expansion can be regularized using the ℓ1-norm of gradients. We experimentally validate our method on different architectures on the CIFAR-10 and ImageNet datasets and show that regularizing a neural network with our method improves robustness against quantization noise.
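The abstract's central step can be made concrete with a first-order bound. The derivation below is a sketch based only on the description above; the symbols w, δ, ε, and λ are standard notation and not quoted from the paper. For a quantization perturbation δ of the weights w with ‖δ‖∞ ≤ ε, a first-order Taylor expansion and Hölder's inequality give

\mathcal{L}(w + \delta) \approx \mathcal{L}(w) + \delta^{\top} \nabla_w \mathcal{L}(w),
\qquad
\left| \delta^{\top} \nabla_w \mathcal{L}(w) \right|
\le \|\delta\|_{\infty}\, \|\nabla_w \mathcal{L}(w)\|_{1}
\le \epsilon\, \|\nabla_w \mathcal{L}(w)\|_{1},

so adding a penalty λ‖∇_w L(w)‖₁ to the training objective controls the first-order effect of any ℓ∞-bounded quantization noise.

A minimal Python sketch of such a gradient ℓ1 penalty follows, assuming PyTorch and double backpropagation through torch.autograd.grad. The helper name loss_with_grad_l1 and the coefficient lam are illustrative, not the paper's reference implementation, and only the weight-gradient term is included (the abstract also mentions activation quantization).

import torch
import torch.nn as nn

def loss_with_grad_l1(model, criterion, inputs, targets, lam=0.01):
    # Illustrative sketch, not the paper's reference implementation.
    # Returns the task loss plus lam * l1-norm of its gradient w.r.t. the weights.
    loss = criterion(model(inputs), targets)
    params = [p for p in model.parameters() if p.requires_grad]
    # create_graph=True keeps the graph so the penalty itself is differentiable
    # (double backpropagation).
    grads = torch.autograd.grad(loss, params, create_graph=True)
    grad_l1 = sum(g.abs().sum() for g in grads)
    return loss + lam * grad_l1

# Usage on random CIFAR-10-shaped data:
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
x = torch.randn(8, 3, 32, 32)
y = torch.randint(0, 10, (8,))
total = loss_with_grad_l1(model, nn.CrossEntropyLoss(), x, y)
total.backward()

Minimizing such a combined objective during training is what, per the abstract, makes a single set of stored weights robust to being rounded to different bit-widths at deployment time without retraining.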
