
VGG 4x without degradation

2019-09-20

An algorithm that effectively prunes channels in each layer, accelerating VGG-16 by 4x without accuracy degradation. Accepted as a poster at _ICCV 2017_.
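The paper selects which channels of a layer to keep by minimizing the reconstruction error of that layer's output (via LASSO channel selection plus least-squares weight reconstruction). A minimal sketch of the same idea, simplified to a greedy selection over flattened activations rather than the paper's exact LASSO procedure (`prune_channels` and its arguments are illustrative names, not from the repo):

```python
import numpy as np

def prune_channels(X, W, keep):
    """Greedy channel selection by output-reconstruction error.

    Simplified illustration of channel pruning: pick `keep` input
    channels and refit the weights so the layer output is preserved.

    X: (n_samples, c_in)  sampled input activations (spatial dims flattened)
    W: (c_in, c_out)      layer weights, so the layer output is X @ W
    keep: number of input channels to retain
    Returns (sorted kept channel indices, reconstructed weights W').
    """
    c_in = X.shape[1]
    Y = X @ W                          # original output we want to reconstruct
    selected = []
    for _ in range(keep):
        best, best_err = None, np.inf
        for c in range(c_in):
            if c in selected:
                continue
            idx = selected + [c]
            # least-squares refit of the weights on the candidate channel set
            Ws, *_ = np.linalg.lstsq(X[:, idx], Y, rcond=None)
            err = np.linalg.norm(Y - X[:, idx] @ Ws)
            if err < best_err:
                best, best_err = c, err
        selected.append(best)
    idx = sorted(selected)
    # final weight reconstruction on the kept channels
    Ws, *_ = np.linalg.lstsq(X[:, idx], Y, rcond=None)
    return idx, Ws
```

If the layer's output truly depends on only a few input channels, the greedy selection recovers them and the refit weights reproduce the output almost exactly; in practice the paper applies this layer by layer with a target speedup ratio.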

```latex
@article{he2017channel,
  title={Channel Pruning for Accelerating Very Deep Neural Networks},
  author={He, Yihui and Zhang, Xiangyu and Sun, Jian},
  journal={arXiv preprint arXiv:1707.06168},
  year={2017}
}
```
Models:

  • VGG-16 4x: 10.1% top-5 error and 29.4% top-1 error on ILSVRC-2012.

[[PDF](https://arxiv.org/abs/1707.06168)] [[Github repo](https://github.com/yihui-he/channel-pruning)]
