Towards a Mathematical Understanding of the Difficulty in Learning with
Feedforward Neural Networks
Abstract
Training deep neural networks to solve machine learning problems is a major challenge in the field, mainly because the associated optimisation problem is highly non-convex. Recent developments have suggested that many training algorithms do not suffer from undesired local minima under certain scenarios, which has consequently led to great efforts to find mathematical explanations for such observations. This work provides an alternative mathematical understanding of the challenge from a smooth optimisation perspective. Under the assumption of exact learning of finite samples, sufficient conditions are identified via a critical point analysis which ensure that any local minimum is also a global minimum. Furthermore, a state-of-the-art algorithm, known as the Generalised Gauss-Newton (GGN) algorithm, is rigorously revisited as an approximate Newton's algorithm, which shares the property of being locally quadratically convergent to a global minimum under the condition of exact learning.
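To make the algorithmic object concrete, the following is a minimal, hypothetical sketch of one damped GGN step for a toy network under squared loss; the network architecture, variable names, and finite-difference Jacobian are illustrative assumptions, not the paper's construction.

```python
import numpy as np

def model(theta, x):
    """Toy one-hidden-layer network: theta packs W1 (2x1) and w2 (2,)."""
    W1 = theta[:2].reshape(2, 1)
    w2 = theta[2:4]
    h = np.tanh(W1 @ x)           # hidden activations, shape (2,)
    return w2 @ h                 # scalar output

def jacobian(theta, x, eps=1e-6):
    """Finite-difference Jacobian of the model output w.r.t. theta
    (an analytic Jacobian would normally be used; this keeps the
    sketch self-contained)."""
    J = np.zeros_like(theta)
    for k in range(theta.size):
        e = np.zeros_like(theta)
        e[k] = eps
        J[k] = (model(theta + e, x) - model(theta - e, x)) / (2 * eps)
    return J

def ggn_step(theta, xs, ys, damping=1e-8):
    """One damped GGN update for the loss 0.5 * sum_i (f(x_i) - y_i)^2.
    For squared loss the outer Hessian is the identity, so the GGN
    matrix reduces to the Gauss-Newton matrix sum_i J_i^T J_i, i.e.
    Newton's Hessian with the residual-curvature term dropped."""
    G = damping * np.eye(theta.size)
    g = np.zeros_like(theta)
    for x, y in zip(xs, ys):
        J = jacobian(theta, x)    # d f / d theta, shape (p,)
        r = model(theta, x) - y   # residual on this sample
        G += np.outer(J, J)       # GGN curvature contribution
        g += J * r                # gradient contribution
    return theta - np.linalg.solve(G, g)
```

Under the paper's exact-learning condition, iterating such steps would be expected to exhibit the stated local quadratic convergence; the damping term is a standard practical safeguard and not part of the idealised analysis.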