Abstract
Improving information flow in deep networks helps ease training difficulties and use parameters more efficiently. Here we propose a new convolutional neural network architecture with alternately updated clique
(CliqueNet). In contrast to prior networks, there are both
forward and backward connections between any two layers
in the same block. The layers are constructed as a loop and
are updated alternately. CliqueNet has several unique properties. Each layer is both the input and the output of every other layer in the same block, so that information flow among layers is maximized. During propagation, the newly updated layers are concatenated to re-update the previously updated layers, and parameters are reused multiple times. This recurrent feedback structure brings higher-level visual information back to refine low-level filters and achieves spatial attention. We analyze the
features generated at different stages and observe that using
refined features leads to a better result. We adopt a multi-scale feature strategy that effectively avoids the progressive
growth of parameters. Experiments on image recognition
datasets including CIFAR-10, CIFAR-100, SVHN, and ImageNet show that our proposed models achieve state-of-the-art performance with fewer parameters.
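The two-stage alternating update described above can be sketched as follows. This is a minimal conceptual illustration, not the authors' implementation: it substitutes small fully-connected transforms for the paper's convolutions, and the function and variable names (`clique_block`, `weights`, `n_stages`) are chosen here for clarity. The key points it demonstrates are that every pair of layers in a block is connected in both directions, and that the same weights are reused when layers are re-updated in later stages.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def clique_block(x0, weights, n_layers, n_stages=2):
    """Conceptual sketch of an alternately updated clique block.

    weights[(i, j)] transforms layer i into a contribution to layer j
    (index 0 denotes the block input). The same weight matrices are
    reused at every stage, mirroring the parameter reuse in the text.
    """
    # Stage I: initialize each layer from the input and the
    # already-initialized layers before it.
    layers = []
    for j in range(1, n_layers + 1):
        pre = x0 @ weights[(0, j)]
        for i, xi in enumerate(layers, start=1):
            pre += xi @ weights[(i, j)]
        layers.append(relu(pre))

    # Stage II (and any further stages): re-update each layer from the
    # most recent versions of all *other* layers, reusing the weights,
    # so newly refined layers feed back into earlier ones.
    for _ in range(n_stages - 1):
        for j in range(1, n_layers + 1):
            pre = sum(layers[i - 1] @ weights[(i, j)]
                      for i in range(1, n_layers + 1) if i != j)
            layers[j - 1] = relu(pre)
    return layers

# Usage: a block of 3 layers on a batch of 2 feature vectors.
rng = np.random.default_rng(0)
d, n_layers = 8, 3
weights = {(i, j): rng.standard_normal((d, d)) * 0.1
           for i in range(n_layers + 1)
           for j in range(1, n_layers + 1) if i != j}
x0 = rng.standard_normal((2, d))
out = clique_block(x0, weights, n_layers)
```

Because every layer is re-updated from all the others, each layer acts as both input to and output of its peers, which is the bidirectional connectivity the abstract refers to.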