Abstract
Deep neural networks have demonstrated their power in
many computer vision applications. State-of-the-art deep
architectures such as VGG, ResNet, and DenseNet are
mostly optimized by the SGD-Momentum algorithm, which
updates the weights by considering their past and current gradients. Nonetheless, SGD-Momentum suffers from
the overshoot problem, which hinders the convergence of
network training. Inspired by the prominent success of
proportional-integral-derivative (PID) controllers in automatic control, we propose a PID approach for accelerating deep network optimization. We first reveal the intrinsic connection between SGD-Momentum and the PID controller, and then present an optimization algorithm that exploits the past, current, and change of gradients to update
the network parameters. The proposed PID method substantially reduces the overshoot phenomenon of SGD-Momentum, and it
achieves up to 50% acceleration on popular deep network
architectures with competitive accuracy, as verified by our
experiments on benchmark datasets including CIFAR10, CIFAR100, and Tiny-ImageNet.
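To make the idea concrete, the following minimal Python sketch shows one way a PID-style update can combine the past (integral), current (proportional), and change (derivative) of the gradients. The hyperparameter names (lr, momentum, kd) and the exact term weighting are illustrative assumptions for this sketch, not the paper's exact algorithm.

```python
# Illustrative PID-style parameter update: past (integral/momentum),
# current (proportional), and change (derivative) of the gradient.
# Names lr, momentum, kd and the weighting below are assumptions.

def pid_update(theta, grad, prev_grad, velocity, deriv,
               lr=0.1, momentum=0.9, kd=1.0):
    # I term: momentum buffer accumulating past gradients (as in
    # SGD-Momentum, which folds the P and I terms together).
    velocity = momentum * velocity - lr * grad
    # D term: smoothed change of the gradient; reacting to the gradient's
    # trend is what damps the overshoot of plain momentum.
    deriv = momentum * deriv + (1.0 - momentum) * (grad - prev_grad)
    # Parameter update: momentum step plus a derivative correction.
    theta = theta + velocity - kd * lr * deriv
    return theta, velocity, deriv


# Toy usage on a 1-D quadratic loss L(theta) = 0.5 * theta**2 (grad = theta).
theta, velocity, deriv, prev_grad = 5.0, 0.0, 0.0, 0.0
for _ in range(100):
    grad = theta  # gradient of the toy loss at the current theta
    theta, velocity, deriv = pid_update(theta, grad, prev_grad,
                                        velocity, deriv)
    prev_grad = grad
print(theta)  # converges toward the minimum at 0
```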