Abstract
We show that for a general class of convex online learning problems, Mirror Descent can always achieve a (nearly) optimal regret guarantee.

1 Introduction

Mirror Descent is a first-order optimization procedure which generalizes the classic Gradient Descent procedure to non-Euclidean geometries by relying on a "distance generating function" specific to the geometry (the squared ℓ2-norm in the case of standard Gradient Descent) [14, 4]. Mirror Descent is also applicable, and has been analyzed, in a stochastic optimization setting [9] and in an online setting, where it can ensure bounded online regret [20]. In fact, many classical online learning algorithms can be viewed as instantiations or variants of Online Mirror Descent, generally either with the Euclidean geometry (e.g. the Perceptron algorithm [5] and Online Gradient Descent [27]), or in the simplex (ℓ1 geometry), using an entropic distance generating function (Winnow [13] and the Multiplicative Weights / Online Exponentiated Gradient algorithm [11]). More recently, the Online Mirror Descent framework has been applied, with appropriate distance generating functions derived, to a variety of new learning problems such as multi-task learning and other matrix learning problems [10], online PCA [26], etc.
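As a concrete illustration of the framework described above, the following is a minimal sketch (not the paper's method) of Online Mirror Descent on the probability simplex with the entropic distance generating function, which recovers a Multiplicative Weights / Exponentiated Gradient style update. The function name, the linear-loss setup, and the step size eta are illustrative assumptions, not taken from the text.

```python
import numpy as np

def online_mirror_descent_entropic(grad_fns, d, eta):
    """Sketch: Online Mirror Descent on the simplex with entropic DGF.

    grad_fns : sequence of callables; grad_fns[t](x) returns the gradient
               of the round-t loss at the learner's current play x.
    d        : dimension of the simplex.
    eta      : step size (illustrative constant; tuning is problem-specific).
    """
    # Uniform start: the minimizer of the negative-entropy DGF over the simplex.
    x = np.full(d, 1.0 / d)
    for grad in grad_fns:
        g = grad(x)
        # Mirror step with entropic DGF: multiplicative update x_{t+1} ∝ x_t * exp(-eta * g_t),
        # followed by Bregman (here simply ℓ1) projection back onto the simplex.
        x = x * np.exp(-eta * g)
        x /= x.sum()
    return x

# Usage: linear losses that always penalize coordinate 0; mass shifts away from it.
losses = [lambda x: np.array([1.0, 0.0]) for _ in range(10)]
x_final = online_mirror_descent_entropic(losses, d=2, eta=0.5)
```

With the Euclidean (squared ℓ2) distance generating function in place of entropy, the same template reduces to Online Gradient Descent, which is the sense in which the algorithms listed above are instantiations of one procedure.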