Refine or Represent: Residual Networks with Explicit Channel-wise Configuration

2019-11-07
Abstract

The successes of deep residual learning are mainly based on one key insight: instead of learning a completely new representation y = H(x), it is much easier to learn and optimize its residual mapping F(x) = H(x) − x, as F(x) is generally closer to zero than the non-residual function H(x). In this paper, we further exploit this insight by explicitly configuring each feature channel with a fine-grained learning style. We define two types of channel-wise learning styles: Refine and Represent. A Refine channel is learnt via the residual function y_i = F_i(x) + x_i with a regularization term on the channel response ||F_i(x)||, aiming to refine the input feature channel x_i of the layer. A Represent channel directly learns a new representation y_i = H_i(x) without computing a residual with reference to x_i. We apply a random channel-wise configuration to each residual learning block. Experimental results on the CIFAR10, CIFAR100 and ImageNet datasets demonstrate that our proposed method substantially improves the performance of conventional residual networks, including ResNet, ResNeXt and SENet.
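Below is a minimal sketch, in PyTorch-style Python, of how the Refine/Represent channel configuration described above could be wired into a residual block. The class name ChannelConfiguredBlock, the refine_ratio parameter, the random mask fixed at construction time, and the squared-norm form of the penalty are illustrative assumptions; the abstract does not specify the authors' exact implementation.

import torch
import torch.nn as nn


class ChannelConfiguredBlock(nn.Module):
    """Residual block whose output channels are split into Refine channels
    (y_i = F_i(x) + x_i, with a penalty on ||F_i(x)||) and Represent channels
    (y_i = H_i(x), no identity shortcut). Illustrative sketch only."""

    def __init__(self, channels: int, refine_ratio: float = 0.5):
        super().__init__()
        # Shared transformation producing F(x) / H(x) for all channels.
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False),
            nn.BatchNorm2d(channels),
        )
        # Randomly mark each channel as Refine (1.0) or Represent (0.0).
        # Fixing this configuration once at construction time is an assumption.
        mask = (torch.rand(channels) < refine_ratio).float()
        self.register_buffer("refine_mask", mask.view(1, channels, 1, 1))
        self.relu = nn.ReLU(inplace=True)
        self.refine_penalty = torch.zeros(())

    def forward(self, x):
        f = self.body(x)
        # Refine channels add their identity input; Represent channels do not.
        y = f + self.refine_mask * x
        # Penalty on the residual response of Refine channels only
        # (an assumed squared-norm form of the term ||F_i(x)||).
        self.refine_penalty = (self.refine_mask * f).pow(2).mean()
        return self.relu(y)

During training, one might sum block.refine_penalty over all blocks and add it to the task loss with a small weight, so that Refine channels keep their residual response close to zero; the weight is a hyperparameter not given in the abstract.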

