
Accelerating Natural Gradient with Higher-Order Invariance

2020-03-20

Abstract

An appealing property of the natural gradient is that it is invariant to arbitrary differentiable reparameterizations of the model. However, this invariance property requires infinitesimal steps and is lost in practical implementations with small but finite step sizes. In this paper, we study invariance properties from a combined perspective of Riemannian geometry and numerical differential equation solving. We define the order of invariance of a numerical method to be its convergence order to an invariant solution. We propose to use higher-order integrators and geodesic corrections to obtain more invariant optimization trajectories. We prove the numerical convergence properties of geodesic corrected updates and show that they can be as computationally efficient as plain natural gradient. Experimentally, we demonstrate that invariance leads to faster optimization, and our techniques improve on traditional natural gradient in deep neural network training and natural policy gradient for reinforcement learning.
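To make the idea concrete, below is a minimal sketch of the ODE view the abstract alludes to: natural gradient descent follows the flow dθ/dt = -F(θ)^{-1}∇L(θ), and the usual update is simply its forward-Euler discretization, while a second-order (midpoint) integrator tracks the same flow more faithfully per step. The toy loss, the `grad` and `fisher` functions, and the step size are illustrative assumptions, not the paper's code or experiments.

```python
import numpy as np

# Toy quadratic loss 0.5 * theta^T A theta and a hypothetical positive-definite
# "Fisher" metric; in practice both come from the model being trained.
A = np.diag([1.0, 10.0])

def grad(theta):
    return A @ theta

def fisher(theta):
    return A + 0.1 * np.eye(2)

def natural_gradient_step(theta, h):
    # Forward-Euler discretization: theta <- theta - h * F(theta)^{-1} grad(theta)
    return theta - h * np.linalg.solve(fisher(theta), grad(theta))

def midpoint_natural_gradient_step(theta, h):
    # Second-order (midpoint) integrator of the same ODE, a simple instance of
    # the higher-order integrators mentioned in the abstract.
    v = -np.linalg.solve(fisher(theta), grad(theta))
    theta_mid = theta + 0.5 * h * v
    v_mid = -np.linalg.solve(fisher(theta_mid), grad(theta_mid))
    return theta + h * v_mid

theta = np.array([3.0, -2.0])
for _ in range(50):
    theta = midpoint_natural_gradient_step(theta, h=0.1)
print(theta)  # approaches the minimizer at the origin
```

The geodesic-corrected updates studied in the paper go further, adding a curvature term derived from the Riemannian geodesic equation; that correction is omitted here for brevity.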

