Abstract. Deep convolutional neural networks have demonstrated their extraordinary power on various tasks. However, it is still very challenging to deploy state-of-the-art models into real-world applications due to their high computational
complexity. How can we design a compact and effective network without massive experiments and expert knowledge? In this paper, we propose a simple and
effective framework to learn and prune deep models in an end-to-end manner.
In our framework, a new type of parameter, the scaling factor, is first introduced
to scale the outputs of specific structures such as neurons, groups, or residual
blocks. We then add sparsity regularization on these factors and solve the resulting optimization problem with a modified stochastic Accelerated Proximal Gradient (APG)
method. By forcing some of the factors to zero, we can safely remove the corresponding structures and thus prune the unimportant parts of a CNN. Compared with
other structure selection methods that may need thousands of trials or iterative
fine-tuning, our method is trained fully end-to-end in one training pass without
bells and whistles. We evaluate our method, Sparse Structure Selection, with several state-of-the-art CNNs, and demonstrate very promising results with adaptive
depth and width selection. Code is available at: https://github.com/huangzehao/sparse-structure-selection.
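
To make the mechanism concrete, below is a minimal PyTorch sketch of the two ingredients the abstract describes: a learnable scaling factor on a structure's output, and the soft-thresholding proximal operator of an l1 penalty, which is the step that drives factors to exact zero. This is an illustrative sketch, not the authors' released implementation; the names ScaledBlock, prox_l1, and the penalty weight gamma are ours, and the full APG update additionally includes a momentum (acceleration) term omitted here.

```python
import torch
import torch.nn as nn

class ScaledBlock(nn.Module):
    """Wraps a sub-structure (e.g. a residual block's body) with a learnable
    scaling factor. If sparsity drives the factor to zero, the wrapped
    structure contributes nothing and can be safely removed."""
    def __init__(self, block: nn.Module):
        super().__init__()
        self.block = block
        self.scale = nn.Parameter(torch.ones(1))  # the scaling factor

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.scale * self.block(x)

@torch.no_grad()
def prox_l1(scale: torch.Tensor, lr: float, gamma: float) -> None:
    """Soft-thresholding: the proximal operator of gamma * |scale|.
    Applied after a gradient step, it shrinks each factor toward zero
    and sets it exactly to zero once it falls below lr * gamma."""
    scale.copy_(scale.sign() * (scale.abs() - lr * gamma).clamp(min=0.0))
```

In a training loop under these assumptions, one would take an ordinary gradient step on the network weights and scaling factors, then call prox_l1 on every scale parameter; structures whose factors reach exact zero are the ones pruned.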