
On the Universality of Online Mirror Descent


Abstract

We show that for a general class of convex online learning problems, Mirror Descent can always achieve a (nearly) optimal regret guarantee.

1 Introduction

Mirror Descent is a first-order optimization procedure which generalizes the classic Gradient Descent procedure to non-Euclidean geometries by relying on a "distance generating function" specific to the geometry (the squared $\ell_2$-norm in the case of standard Gradient Descent) [14, 4]. Mirror Descent is also applicable, and has been analyzed, in a stochastic optimization setting [9] and in an online setting, where it can ensure bounded online regret [20]. In fact, many classical online learning algorithms can be viewed as instantiations or variants of Online Mirror Descent, generally either with the Euclidean geometry (e.g. the Perceptron algorithm [5] and Online Gradient Descent [27]), or on the simplex ($\ell_1$ geometry), using an entropic distance generating function (Winnow [13] and the Multiplicative Weights / Online Exponentiated Gradient algorithm [11]). More recently, the Online Mirror Descent framework has been applied, with appropriate distance generating functions derived for a variety of new learning problems, such as multi-task learning and other matrix learning problems [10], online PCA [26], etc.
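To make the role of the distance generating function concrete, here is a minimal NumPy sketch (illustrative, not from the paper) of the general Online Mirror Descent update $w_{t+1} = \nabla\psi^*(\nabla\psi(w_t) - \eta\, g_t)$ in the two instantiations the paragraph mentions: with the squared $\ell_2$-norm as distance generating function the update reduces to Online Gradient Descent, and with the entropic distance generating function on the simplex it becomes the Multiplicative Weights / Exponentiated Gradient update. Function names, the step size eta, and the toy loss sequence are our own choices for illustration.

```python
import numpy as np

def omd_euclidean_step(w, grad, eta):
    """OMD step with psi(w) = 0.5 * ||w||_2^2.

    Here the mirror map is the identity, so the update is exactly
    Online Gradient Descent: w <- w - eta * grad.
    """
    return w - eta * grad

def omd_entropic_step(w, grad, eta):
    """OMD step on the probability simplex with the entropic
    distance generating function psi(w) = sum_i w_i * log(w_i).

    The dual-space step followed by the Bregman projection onto the
    simplex yields the Multiplicative Weights / Exponentiated Gradient
    update: w_i <- w_i * exp(-eta * grad_i), then renormalize.
    """
    v = w * np.exp(-eta * grad)   # multiplicative step in the dual space
    return v / v.sum()            # projection back onto the simplex

# Toy usage: online linear losses l_t(w) = <g_t, w> over the simplex,
# with random loss gradients standing in for the adversary's choices.
rng = np.random.default_rng(0)
d, T, eta = 5, 100, 0.1
w = np.full(d, 1.0 / d)           # start at the uniform distribution
for t in range(T):
    g = rng.normal(size=d)        # loss gradient revealed at round t
    w = omd_entropic_step(w, g, eta)
```

The only difference between the two instantiations is the distance generating function; the surrounding online protocol (observe a loss gradient, take one mirror step) is identical, which is what lets the framework cover algorithms as different as the Perceptron and Multiplicative Weights.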

