SHIFTED AND SQUEEZED 8-BIT FLOATING POINT FORMAT FOR LOW-PRECISION TRAINING OF DEEP NEURAL NETWORKS

2019-12-30

Abstract

Training with a larger number of parameters while keeping fast iterations is an increasingly adopted strategy for developing better-performing Deep Neural Network (DNN) models. This necessitates a larger memory footprint and greater computational requirements for training. Here we introduce a novel methodology for training deep neural networks using 8-bit floating point (FP8) numbers. Reduced bit precision allows for a larger effective memory and increased computational speed. We name this method Shifted and Squeezed FP8 (S2FP8). We show that, unlike previous 8-bit precision training methods, the proposed method works out-of-the-box for representative models: ResNet-50, Transformer and NCF. The method maintains model accuracy without requiring fine-tuning of loss scaling parameters or keeping certain layers in single precision. We introduce two learnable statistics of the DNN tensors, shifted and squeezed factors, that are used to optimally adjust the range of the tensors in 8 bits, thereby minimizing the loss of information due to quantization.
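
The core idea sketched in the abstract (remapping a tensor's dynamic range with a shift and a squeeze factor so it fits the narrow representable range of an 8-bit float before quantizing) can be illustrated numerically. The snippet below is a minimal sketch under stated assumptions, not the paper's exact formulation: the function names, the target exponent range `fp8_exp_min`/`fp8_exp_max`, and the mantissa-rounding stand-in for true FP8 arithmetic are all illustrative choices.

```python
import numpy as np

def shift_squeeze_factors(x, fp8_exp_min=-6.0, fp8_exp_max=8.0, eps=1e-12):
    """Derive a shift (additive) and squeeze (multiplicative) factor in log2 space
    that map the tensor's magnitude range onto an assumed FP8-representable range."""
    log_mag = np.log2(np.abs(x[x != 0]) + eps)
    lo, hi = log_mag.min(), log_mag.max()
    squeeze = (fp8_exp_max - fp8_exp_min) / max(hi - lo, eps)  # compress dynamic range
    shift = fp8_exp_min - squeeze * lo                         # slide it into the target window
    return shift, squeeze

def fake_fp8_quantize(x, mantissa_bits=2):
    """Crude simulation of FP8 rounding: keep only a few mantissa bits."""
    m, e = np.frexp(x)
    scale = 2.0 ** mantissa_bits
    return np.ldexp(np.round(m * scale) / scale, e)

def s2_quant_dequant(x, shift, squeeze, eps=1e-12):
    """Apply the shift/squeeze in log2 space, quantize, then undo the transform."""
    sign = np.sign(x)
    log_mag = np.log2(np.abs(x) + eps)
    y = sign * 2.0 ** (squeeze * log_mag + shift)  # shifted-and-squeezed tensor
    y_q = fake_fp8_quantize(y)
    log_back = (np.log2(np.abs(y_q) + eps) - shift) / squeeze
    return np.sign(y_q) * 2.0 ** log_back

# Small-magnitude values (e.g. gradients) that plain FP8 would struggle to represent.
x = np.random.randn(1024).astype(np.float32) * 1e-3
shift, squeeze = shift_squeeze_factors(x)
x_rec = s2_quant_dequant(x, shift, squeeze)
print("max relative error:", np.max(np.abs(x_rec - x) / (np.abs(x) + 1e-12)))
```

In the sketch the factors are computed from tensor statistics; in the paper they are learnable quantities updated during training, which is what lets the range adjustment track the tensors as they evolve.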

