Abstract
Deep convolutional neural networks (ConvNets) are a
promising approach for high-performance image classifi-
cation. The behavior of ConvNets has mainly been analyzed
through the neuron activations, e.g., by visualizing them. In
this paper, in contrast to the activations, we focus on the filters,
which are the main components of ConvNets. By analyzing the two types of filters at the convolution and fully-connected
(FC) layers, respectively, of various pre-trained ConvNets,
we present methods to efficiently reformulate the filters,
improving both the memory footprint and the classification performance of the ConvNets. These methods formulate the filter
bases in a parameter-free form and provide an
efficient representation for the FC layer. The experimental
results on image classification show that the methods are
favorably applied to improve various ConvNets, including
ResNet, trained on ImageNet, while exhibiting high transferability to other datasets.