
Stochastic and Adversarial Online Learning without Hyperparameters


Abstract 

Most online optimization algorithms focus on one of two things: performing well in adversarial settings by adapting to unknown data parameters (such as Lipschitz constants), typically achieving O(√T) regret, or performing well in stochastic settings where they can leverage some structure in the losses (such as strong convexity), typically achieving O(log T) regret. Algorithms that focus on the former problem have hitherto achieved O(√T) in the stochastic setting rather than O(log T). Here we introduce an online optimization algorithm that achieves O(log⁴ T) regret in a wide class of stochastic settings while gracefully degrading to the optimal O(√T) regret in adversarial settings (up to logarithmic factors). Our algorithm does not require any prior knowledge about the data or tuning of parameters to achieve superior performance.
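To make the regret quantity in the abstract concrete, the sketch below runs a plain online gradient descent baseline (not the paper's hyperparameter-free algorithm) on a one-dimensional strongly convex stochastic problem and reports cumulative regret against the best fixed point in hindsight. The quadratic loss family, the step-size schedule η_t = 1/(μt), and all names are illustrative assumptions; note that this baseline needs the strong-convexity parameter μ up front, which is exactly the kind of prior knowledge the paper's algorithm avoids.

```python
import numpy as np

# Illustrative sketch only: standard online gradient descent, NOT the
# hyperparameter-free algorithm from the paper. It demonstrates the
# regret quantity the abstract refers to, on strongly convex losses.

rng = np.random.default_rng(0)
T = 10_000
mu = 1.0                      # strong-convexity parameter (assumed known here;
                              # the paper's point is to avoid such knowledge)
targets = rng.normal(loc=0.5, scale=1.0, size=T)

# losses l_t(w) = (mu/2) * (w - z_t)^2 with stochastic targets z_t
w = 0.0
iterate_losses = []
for t, z in enumerate(targets, start=1):
    iterate_losses.append(0.5 * mu * (w - z) ** 2)
    grad = mu * (w - z)
    w -= grad / (mu * t)      # eta_t = 1/(mu*t): the classic schedule that
                              # yields O(log T) regret under strong convexity

# the best fixed comparator in hindsight is the mean of the targets
w_star = targets.mean()
comparator_loss = 0.5 * mu * ((w_star - targets) ** 2).sum()
regret = sum(iterate_losses) - comparator_loss
print(f"cumulative regret after T={T}: {regret:.2f}")
```

Rerunning with a data-independent schedule such as η_t = 1/√t instead would illustrate the O(√T) growth that adversarially robust algorithms had previously been stuck with in stochastic settings.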
