ENABLING DEEP SPIKING NEURAL NETWORKS WITH HYBRID CONVERSION AND SPIKE TIMING DEPENDENT BACKPROPAGATION


Abstract

Spiking Neural Networks (SNNs) operate with asynchronous discrete events (or spikes), which can potentially lead to higher energy efficiency in neuromorphic hardware implementations. Many works have shown that an SNN for inference can be formed by copying the weights from a trained Artificial Neural Network (ANN) and setting the firing threshold for each layer as the maximum input received in that layer. Such converted SNNs require a large number of time-steps to achieve competitive accuracy, which diminishes the energy savings. The number of time-steps can be reduced by training SNNs with spike-based backpropagation from scratch, but that is computationally expensive and slow. To address these challenges, we present a computationally efficient training technique for deep SNNs. We propose a hybrid training methodology: 1) take a converted SNN and use its weights and thresholds as an initialization for spike-based backpropagation, and 2) perform incremental spike-timing dependent backpropagation (STDB) on this carefully initialized network to obtain an SNN that converges within a few epochs and requires fewer time-steps for input processing. STDB is performed with a novel surrogate gradient function defined using the neuron's spike time. The weight update is proportional to the difference between the current time-step and the most recent time-step at which the neuron generated an output spike. SNNs trained with our hybrid conversion-and-STDB approach require 10X-25X fewer time-steps while achieving accuracy similar to purely converted SNNs. The proposed training methodology converges in fewer than 20 epochs of spike-based backpropagation for most standard image classification datasets, thereby greatly reducing the training complexity compared to training SNNs from scratch. We perform experiments on the CIFAR-10, CIFAR-100, and ImageNet datasets for both VGG and ResNet architectures. We achieve a top-1 accuracy of 65.19% on the ImageNet dataset with an SNN that uses 250 time-steps, which is 10X faster than converted SNNs of similar accuracy.
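To make the spike-time-dependent surrogate gradient concrete, below is a minimal PyTorch sketch. The abstract only states that the gradient depends on the gap between the current time-step and the neuron's most recent output spike; the exponential decay form, the constants ALPHA and BETA, and the class and variable names here are illustrative assumptions, not the paper's exact formulation.

```python
import torch

# Hypothetical constants controlling the surrogate's shape; the abstract only
# states that the gradient depends on the time elapsed since the neuron's last spike.
ALPHA = 0.3   # assumed scale of the surrogate gradient
BETA = 0.01   # assumed decay rate per elapsed time-step


class SpikeTimeSurrogate(torch.autograd.Function):
    """Step (spike) function with a spike-time-dependent surrogate gradient.

    Forward: emit a spike (1.0) when the membrane potential reaches the threshold.
    Backward: replace the non-differentiable step with a term that decays with
    delta_t = t - t_last_spike, the gap between the current time-step and the
    most recent time-step at which the neuron fired.
    """

    @staticmethod
    def forward(ctx, membrane, threshold, delta_t):
        ctx.save_for_backward(delta_t)
        return (membrane >= threshold).float()

    @staticmethod
    def backward(ctx, grad_output):
        (delta_t,) = ctx.saved_tensors
        # Assumed exponential decay in the elapsed spike time: neurons that spiked
        # recently (small delta_t) pass larger gradients than long-silent ones.
        surrogate = ALPHA * torch.exp(-BETA * delta_t)
        # Gradients are only needed for the membrane-potential input.
        return grad_output * surrogate, None, None


if __name__ == "__main__":
    membrane = torch.randn(4, requires_grad=True)
    threshold = torch.tensor(1.0)
    # Toy values: time-steps elapsed since each neuron's last output spike.
    delta_t = torch.tensor([1.0, 5.0, 20.0, 100.0])
    spikes = SpikeTimeSurrogate.apply(membrane, threshold, delta_t)
    spikes.sum().backward()
    print(membrane.grad)  # gradients shrink as delta_t grows
```

Because the surrogate shrinks as the elapsed time grows, recently active neurons receive larger weight updates, matching the spike-timing dependence described in the abstract.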
