PyramidNet-caffe

Caffe implementation of the paper "Deep Pyramidal Residual Networks" (https://arxiv.org/abs/1610.02915).

This repository contains the code for the paper:

Dongyoon Han*, Jiwhan Kim*, and Junmo Kim, "Deep Pyramidal Residual Networks", CVPR 2017 (* equal contribution).

Abstract

Deep convolutional neural networks (DCNNs) have shown remarkable performance in image classification tasks in recent years. Generally, deep neural network architectures are stacks consisting of a large number of convolutional layers, and they perform downsampling along the spatial dimension via pooling to reduce memory usage. At the same time, the feature map dimension (i.e., the number of channels) is sharply increased at downsampling locations, which is essential to ensure effective performance because it increases the capability of high-level attributes. This also applies to residual networks and is closely related to their performance. In this research, instead of sharply increasing the feature map dimension at the units that perform downsampling, we gradually increase the feature map dimension at all units so that as many locations as possible are involved. This design, discussed in depth together with our new insights, has proven to be an effective way to improve generalization ability. Furthermore, we propose a novel residual unit that further improves the classification accuracy with our new network architecture. Experiments on benchmark CIFAR datasets show that our network architecture has superior generalization ability compared to the original residual networks.


Figure 1: Schematic illustration of (a) basic residual units, (b) bottleneck, (c) wide residual units, and (d) our pyramidal residual units.


Figure 2: Visual illustrations of (a) additive PyramidNet, (b) multiplicative PyramidNet, and (c) comparison of (a) and (b).
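As a rough aid to reading Figures 1 and 2, the sketch below (our own illustration, not code from this repository) computes per-unit feature map widths under the additive widening rule from the paper, D_k = floor(D_{k-1} + alpha/N), alongside a geometric (multiplicative) variant that reaches the same final width. The function name, the exact form of the multiplicative rule, and the base width of 64 (the ImageNet stem; the CIFAR models start from 16) are assumptions made for illustration.

```python
import math

def pyramid_widths(n_units, alpha, base_width=64, mode="additive"):
    """Per-unit feature map widths of a pyramidal network (illustrative)."""
    widths = [base_width]
    acc = float(base_width)
    if mode == "additive":
        step = alpha / float(n_units)          # D_k = floor(D_{k-1} + alpha/N)
        for _ in range(n_units):
            acc += step
            widths.append(int(math.floor(acc)))
    else:
        # Multiplicative variant: grow geometrically to the same final width.
        ratio = ((base_width + alpha) / float(base_width)) ** (1.0 / n_units)
        for _ in range(n_units):
            acc *= ratio
            widths.append(int(math.floor(acc)))
    return widths

# A 200-layer bottleneck PyramidNet has (200 - 2) / 3 = 66 units; with
# alpha = 300 the width grows gradually from 64 to 64 + 300 = 364 channels.
print(pyramid_widths(66, 300)[-1])   # -> 364
```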

Results

ImageNet

Top-1 and Top-5 error rates of single-model, single-crop (224×224) evaluation on the ImageNet dataset. The additive PyramidNet is used for these results.

| Network | # of parameters | Output feat. dimension | Top-1 error | Top-5 error |
| --- | --- | --- | --- | --- |
| ResNet-101 | 44.5M | 2048 | 23.6 | 7.1 |
| PyramidNet-101, alpha=250 | 23.5M | 1256 | 23.24 | 6.59 |
| ResNet-152 | 60.0M | 2048 | 23.0 | 6.7 |
| PyramidNet-152, alpha=200 | 26.0M | 1056 | 22.44 | 6.14 |
| PyramidNet-200, alpha=300 | 62.1M | 1456 | 20.41 | 5.16 |
| PyramidNet-200, alpha=450, Dropout (0.5) | 116.4M | 2056 | 20.27 | 5.49 |
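As a sanity check on the "Output feat. dimension" column (our own arithmetic, not part of the original README): for the bottleneck PyramidNets above, the width grows from the 64-channel stem to 64 + alpha, and the last bottleneck expands it 4×, so the final dimension is 4 × (64 + alpha).

```python
# Final feature dimension of a bottleneck PyramidNet with ImageNet stem width 64.
for alpha in (200, 250, 300, 450):
    print(alpha, 4 * (64 + alpha))   # 1056, 1256, 1456, 2056 -- matches the table
```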

Model files download: link

Notes

  1. The ImageNet results were obtained using the uploaded code.

  2. When testing with our model, do not forget to apply the input scale factor (0.017352); see the preprocessing sketch after this list.
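The following minimal pycaffe sketch shows where the scale factor from note 2 fits into inference. The deploy prototxt/weights file names, the 'data'/'prob' blob names, and the BGR mean values are placeholders, not values taken from this repository; the key point is that caffe.io.Transformer applies set_input_scale after mean subtraction.

```python
import caffe
import numpy as np

# Hypothetical file names; use the deploy prototxt and weights from the
# "Model files download" link above.
net = caffe.Net('pyramidnet_deploy.prototxt', 'pyramidnet.caffemodel', caffe.TEST)

transformer = caffe.io.Transformer({'data': net.blobs['data'].data.shape})
transformer.set_transpose('data', (2, 0, 1))              # HWC -> CHW
transformer.set_channel_swap('data', (2, 1, 0))           # RGB -> BGR
transformer.set_raw_scale('data', 255.0)                  # [0,1] floats -> [0,255]
transformer.set_mean('data', np.array([104.0, 117.0, 123.0]))  # assumed BGR means
transformer.set_input_scale('data', 0.017352)             # scale factor from note 2

image = caffe.io.load_image('example.jpg')                # RGB float image in [0,1]
net.blobs['data'].data[...] = transformer.preprocess('data', image)
prob = net.forward()['prob'][0]                           # 'prob' is an assumed output name
print('top-1 class:', prob.argmax())
```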

Contact

Jiwhan Kim (jhkim89@kaist.ac.kr), Dongyoon Han (dyhan@kaist.ac.kr), Junmo Kim (junmo.kim@kaist.ac.kr)

