Dimension-Free Exponentiated Gradient

Abstract

I present a new online learning algorithm that extends the exponentiated gradient framework to infinite dimensional spaces. My analysis shows that the algorithm is implicitly able to estimate the L2 norm of the unknown competitor, $U$, achieving a regret bound of the order of $U \log(U T + 1)\sqrt{T}$, instead of the standard $U^2 \sqrt{T}$ achievable without knowing $U$. For this analysis, I introduce novel tools for algorithms with time-varying regularizers, through the use of local smoothness. Through a lower bound, I also show that the algorithm is optimal up to a $\sqrt{\log T}$ term for linear and Lipschitz losses.
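
To make the regret comparison concrete, the following is a minimal illustrative sketch (not code from the paper): it evaluates the two bounds stated in the abstract, treating the hidden $O(\cdot)$ factors as constants equal to one. The horizon T and the grid of competitor norms U are arbitrary choices made here for illustration.

```python
import numpy as np

# Compare the two regret guarantees from the abstract as functions of the
# (unknown) competitor norm U, for a fixed horizon T.
# Constants are illustrative: both bounds are stated only up to O(.) factors.
T = 10_000
U = np.array([0.1, 1.0, 10.0, 100.0])

# Dimension-free bound: O(U log(U T + 1) sqrt(T)), achieved without knowing U.
dimension_free_bound = U * np.log(U * T + 1) * np.sqrt(T)

# Standard bound: O(U^2 sqrt(T)), also achievable without knowing U.
standard_bound = U**2 * np.sqrt(T)

for u, dfree, std in zip(U, dimension_free_bound, standard_bound):
    print(f"U = {u:7.1f}   dimension-free ~ {dfree:12.1f}   standard ~ {std:12.1f}")
```

For large $U$ the dimension-free bound grows roughly as $U \log U$ rather than $U^2$, which is the benefit of implicitly estimating the norm of the competitor.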
