Abstract. In this paper we propose a novel decomposition method
based on filter group approximation, which can significantly reduce the
redundancy of deep convolutional neural networks (CNNs) while preserving most of their feature representation. Unlike other low-rank decomposition algorithms, which operate on the spatial or channel dimensions of filters, our method focuses on exploiting the filter
group structure for each layer. For several commonly used CNN models,
including VGG and ResNet, our method reduces floating-point operations (FLOPs) by over 80% with a smaller accuracy drop than state-of-the-art methods on various image classification datasets. Moreover, experiments
demonstrate that our method alleviates the degeneracy of the compressed network, a problem that otherwise hurts its convergence and performance.
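To make the FLOP-reduction claim concrete, the following back-of-envelope sketch compares a standard 3x3 convolution against a grouped 3x3 convolution followed by a 1x1 convolution that mixes the groups. The layer sizes and group count here are hypothetical illustrations, not the paper's actual decomposition or results:

```python
def conv_flops(h, w, c_in, c_out, k, groups=1):
    # Multiply-accumulate count for a conv layer producing an h x w output
    # with 'groups' filter groups: each output channel sees c_in/groups inputs.
    return h * w * k * k * (c_in // groups) * c_out

# Hypothetical layer: 28x28 feature map, 256 -> 256 channels, 3x3 kernels.
h = w = 28
c_in = c_out = 256
k = 3

full = conv_flops(h, w, c_in, c_out, k)

# Approximate the layer with a grouped 3x3 conv (8 groups, chosen arbitrarily
# for illustration) plus a 1x1 conv to restore cross-group information flow.
g = 8
approx = conv_flops(h, w, c_in, c_out, k, groups=g) + conv_flops(h, w, c_out, c_out, 1)

reduction = 1 - approx / full  # fraction of FLOPs removed
```

Even with the extra 1x1 mixing layer, the grouped decomposition removes well over half of the multiply-accumulates in this toy setting, which is the kind of arithmetic that underlies the reported FLOP savings.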