Surfing: Iterative Optimization Over Incrementally Trained Deep Networks

Abstract

We investigate a sequential optimization procedure to minimize the empirical risk functional $f_{\hat\theta}(x) = \frac{1}{2}\|G_{\hat\theta}(x) - y\|^2$ for certain families of deep networks $G_\theta(x)$. The approach is to optimize a sequence of objective functions that use network parameters obtained during different stages of the training process. When initialized with random parameters $\theta_0$, we show that the objective $f_{\theta_0}(x)$ is "nice" and easy to optimize with gradient descent. As learning is carried out, we obtain a sequence of generative networks $x \mapsto G_{\theta_t}(x)$ and associated risk functions $f_{\theta_t}(x)$, where $t$ indicates a stage of stochastic gradient descent during training. Since the parameters of the network do not change by very much in each step, the surface evolves slowly and can be incrementally optimized. The algorithm is formalized and analyzed for a family of expansive networks. We call the procedure surfing since it rides along the peak of the evolving (negative) empirical risk function, starting from a smooth surface at the beginning of learning and ending with a wavy nonconvex surface after learning is complete. Experiments show how surfing can be used to find the global optimum and for compressed sensing even when direct gradient descent on the final learned network fails.
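The abstract suggests a straightforward implementation: save parameter checkpoints $\theta_0, \theta_1, \ldots, \theta_T$ while training the generator, then minimize each risk surface in turn, warm-starting gradient descent on $f_{\theta_{t+1}}$ from the minimizer found for $f_{\theta_t}$. Below is a minimal sketch in JAX; the two-layer ReLU generator `G`, the random stand-in checkpoints, and all step sizes are illustrative assumptions, not the paper's actual experimental setup.

```python
import jax
import jax.numpy as jnp

def G(theta, x):
    # Hypothetical two-layer expansive ReLU generator: theta = (W1, W2).
    W1, W2 = theta
    return jax.nn.relu(W2 @ jax.nn.relu(W1 @ x))

def surf(thetas, y, x0, inner_steps=200, lr=1e-2):
    """Ride the evolving surface f_t(x) = 0.5 * ||G(theta_t, x) - y||^2.

    `thetas` is a sequence of parameter checkpoints saved during training;
    the minimizer found for each surface warm-starts the next one.
    """
    risk = lambda x, theta: 0.5 * jnp.sum((G(theta, x) - y) ** 2)
    grad_x = jax.grad(risk)              # gradient with respect to x only
    x = x0
    for theta in thetas:                 # surfaces f_{theta_0}, f_{theta_1}, ...
        for _ in range(inner_steps):     # plain gradient descent on each surface
            x = x - lr * grad_x(x, theta)
    return x                             # approximate minimizer of the final f

if __name__ == "__main__":
    key = jax.random.PRNGKey(0)
    # Random stand-in checkpoints: in practice these would be snapshots
    # theta_t saved during SGD, ordered from random init to final parameters.
    thetas = [(jax.random.normal(k, (20, 5)), jax.random.normal(k, (50, 20)))
              for k in jax.random.split(key, 10)]
    y = jax.random.normal(key, (50,))
    x_hat = surf(thetas, y, x0=jnp.zeros(5))
```

In contrast, direct gradient descent on the final surface alone corresponds to calling `surf([thetas[-1]], y, x0)`, which the experiments show can fail when that surface is wavy and nonconvex.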

