Abstract
Deep learning is formulated as a discrete-time optimal control problem. This allows one to characterize necessary conditions for optimality and to develop training algorithms that do not rely on gradients with respect to the trainable parameters. In particular, we introduce the discrete-time method of successive approximations (MSA), which is based on Pontryagin’s maximum principle, for training neural networks. A rigorous error estimate for the discrete MSA is obtained, which sheds light on its dynamics and the means to stabilize the algorithm. The developed methods are applied to train, in a rather principled way, neural networks with weights that are constrained to take values in a discrete set. We obtain competitive performance and, interestingly, very sparse weights in the case of ternary networks, which may be useful for model deployment on low-memory devices.