Abstract
Deep learning methods have recently achieved great success on many computer vision problems. Despite these practical successes, the optimization of deep networks remains an active topic in deep learning research. In this work, we investigate properties of network solutions that can potentially lead to good performance. Our research is inspired by theoretical and empirical results on initializing networks with orthogonal matrices, but we are interested in how orthogonal weight matrices perform when network training converges. To this end, we propose to constrain the solutions of weight matrices to the orthogonal feasible set throughout network training,
and we achieve this with a simple yet effective method called Singular Value Bounding (SVB). In SVB, all singular values of each weight matrix are simply bounded within a narrow band around the value of 1.
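To make the SVB step concrete, the following is a minimal NumPy sketch of the projection just described; the function name, the band width `eps`, and its default value are illustrative assumptions, not the paper's reported settings.

```python
import numpy as np

def singular_value_bounding(W, eps=0.05):
    """Clip all singular values of weight matrix W into the narrow
    band [1/(1+eps), 1+eps] around 1, projecting W back toward the
    set of (scaled) orthogonal matrices."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    s = np.clip(s, 1.0 / (1.0 + eps), 1.0 + eps)
    # Reconstruct W with the bounded spectrum; U * s scales U's columns.
    return (U * s) @ Vt
```

In training, such a projection would typically be applied to every layer's weight matrix once every few SGD iterations, with 4-D convolutional kernels first reshaped into 2-D matrices.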
Based on the same motivation, we also propose Bounded Batch Normalization (BBN), which improves Batch Normalization by removing its potential risk of an ill-conditioned layer transform.
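The abstract does not spell out BBN's mechanics; one plausible reading, consistent with the motivation above, is to bound Batch Normalization's effective per-channel scaling in a narrow band around 1. The sketch below is a hypothetical illustration under that assumption (the names `bounded_bn_gamma`, `band`, and `eps_bn` are ours):

```python
import numpy as np

def bounded_bn_gamma(gamma, running_var, eps_bn=1e-5, band=0.05):
    """Hypothetical BBN-style correction: Batch Normalization rescales
    each channel by gamma / sqrt(var + eps_bn); clipping that effective
    scale into [1/(1+band), 1+band] keeps the layer transform
    well-conditioned. Returns the corrected gamma."""
    std = np.sqrt(running_var + eps_bn)
    scale = np.clip(gamma / std, 1.0 / (1.0 + band), 1.0 + band)
    return scale * std
```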
We present both theoretical and empirical results to justify the proposed methods. Experiments on benchmark image classification datasets show
the efficacy of our proposed SVB and BBN. In particular, we
achieve state-of-the-art error rates of 3.06% on CIFAR10 and 16.90% on CIFAR100, using off-the-shelf network architectures (Wide ResNets). Our preliminary results
on ImageNet also show promise for large-scale learning. We release the implementation code of our methods at
www.aperture-lab.net/research/svb