
VGG 4x without degradation

2019-09-20

An algorithm that effectively prunes channels in each layer, accelerating VGG-16 by 4x without accuracy degradation. Accepted as a poster at _ICCV 2017_.
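The paper's approach selects a subset of input channels per layer (via LASSO regression) and then reconstructs the layer's weights by least squares so the output is preserved. A minimal numpy sketch of this two-step idea, using greedy forward selection as a stand-in for the LASSO step; the function name and shapes are illustrative, not from the official repo:

```python
import numpy as np

def prune_channels(X, W, keep):
    """Select `keep` input channels and reconstruct the weights.

    X: (N, c) sampled per-channel responses feeding the layer.
    W: (c, n) weights mapping c input channels to n outputs.
    Returns (selected channel indices, new weights of shape (keep, n)).
    """
    Y = X @ W  # original layer output we want to preserve
    c = X.shape[1]
    selected = []
    # Greedy forward selection by reconstruction error
    # (the paper solves this selection with LASSO instead).
    for _ in range(keep):
        best, best_err = None, np.inf
        for j in range(c):
            if j in selected:
                continue
            idx = selected + [j]
            W_try, *_ = np.linalg.lstsq(X[:, idx], Y, rcond=None)
            err = np.linalg.norm(Y - X[:, idx] @ W_try)
            if err < best_err:
                best, best_err = j, err
        selected.append(best)
    # Final least-squares weight reconstruction on the kept channels.
    W_new, *_ = np.linalg.lstsq(X[:, selected], Y, rcond=None)
    return selected, W_new
```

Applied layer by layer, this shrinks each convolution's input channel count while keeping its output feature maps close to the original, which is what yields the speedup without accuracy loss.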

```latex
@article{he2017channel,
  title={Channel Pruning for Accelerating Very Deep Neural Networks},
  author={He, Yihui and Zhang, Xiangyu and Sun, Jian},
  journal={arXiv preprint arXiv:1707.06168},
  year={2017}
}
```
Models:

  • VGG-16 4x: 10.1% top-5 error and 29.4% top-1 error on ILSVRC-2012.

[[PDF](https://arxiv.org/abs/1707.06168)] [[GitHub repo](https://github.com/yihui-he/channel-pruning)]

