Second-Order Kernel Online Convex Optimization with Adaptive Sketching

Abstract

Kernel online convex optimization (KOCO) is a framework combining the expressiveness of nonparametric kernel models with the regret guarantees of online learning. First-order KOCO methods such as functional gradient descent require only O(t) time and space per iteration, and, when the only information on the losses is their convexity, achieve a minimax optimal O(√T) regret. Nonetheless, many common losses in kernel problems, such as the squared loss, logistic loss, and squared hinge loss, possess stronger curvature that can be exploited. In this case, second-order KOCO methods achieve O(log(Det(K))) regret, which we show scales as O(d_eff log T), where d_eff is the effective dimension of the problem and is usually much smaller than O(√T). The main drawback of second-order methods is their much higher O(t²) space and time complexity. In this paper, we introduce kernel online Newton step (KONS), a new second-order KOCO method that also achieves O(d_eff log T) regret. To address the computational complexity of second-order methods, we introduce a new matrix sketching algorithm for the kernel matrix K_t, and show that for a chosen parameter γ ≤ 1 our Sketched-KONS reduces the space and time complexity by a factor of γ² to O(t²γ²) space and time per iteration, while incurring only 1/γ times more regret.
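To make the quantities in the abstract concrete, here is a minimal Python sketch (not code from the paper; the RBF kernel, the synthetic data, and the regularization λ are assumptions chosen for illustration) that computes the effective dimension d_eff = Tr(K_t (K_t + λI)⁻¹) and the log-determinant term log det(I + K_t/λ) that the O(log(Det(K))) regret bound tracks, and compares them against √t.

```python
# Illustrative sketch only: effective dimension and log-det regret term
# for an RBF kernel on random data. All choices (kernel, data, lambda)
# are assumptions, not the paper's experimental setup.
import numpy as np

def rbf_kernel(X, sigma=1.0):
    # Pairwise squared distances -> Gaussian kernel matrix K_t.
    sq = np.sum(X**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    return np.exp(-d2 / (2.0 * sigma**2))

def effective_dimension(K, lam):
    # d_eff = Tr(K (K + lam I)^{-1}), computed via eigenvalues for stability.
    eigs = np.linalg.eigvalsh(K)
    return float(np.sum(eigs / (eigs + lam)))

def log_det_term(K, lam):
    # log det(I + K / lam), the quantity the O(log(Det(K))) bound tracks.
    eigs = np.linalg.eigvalsh(K)
    return float(np.sum(np.log1p(eigs / lam)))

rng = np.random.default_rng(0)
lam = 1.0
for t in (100, 400, 1600):
    X = rng.standard_normal((t, 5))
    K = rbf_kernel(X)
    print(f"t={t:5d}  d_eff={effective_dimension(K, lam):7.2f}  "
          f"logdet={log_det_term(K, lam):8.2f}  sqrt(t)={np.sqrt(t):7.2f}")
```

For kernels with fast-decaying spectra, d_eff grows much more slowly than √t, which is exactly the regime where the second-order O(d_eff log T) bound beats the first-order O(√T) one.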

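The O(t²) bottleneck itself is easy to see in code. Below is a hypothetical, simplified second-order update in the style of an online Newton step on an explicit d-dimensional feature map; it is not the paper's KONS (which works directly with the kernel matrix K_t), but it shows the rank-one curvature update whose O(d²) per-step cost becomes O(t²) in the kernel setting, where the effective feature dimension grows with t. This is the cost that Sketched-KONS reduces by a factor of γ². The squared loss and all names here are assumptions for illustration.

```python
# Hypothetical illustration of a second-order online update, NOT the
# paper's KONS: rank-one curvature updates cost O(d^2) per step, which
# becomes O(t^2) when the feature dimension grows with t as in KOCO.
import numpy as np

class OnlineNewtonStep:
    def __init__(self, d, alpha=1.0):
        self.w = np.zeros(d)
        self.A_inv = np.eye(d) / alpha  # inverse of the curvature matrix A_t

    def update(self, x, y):
        # Squared loss l_t(w) = 0.5 * (w @ x - y)^2; gradient g = (w @ x - y) x.
        g = (self.w @ x - y) * x
        # Rank-one update A_t = A_{t-1} + g g^T via Sherman-Morrison: O(d^2).
        Ag = self.A_inv @ g
        self.A_inv -= np.outer(Ag, Ag) / (1.0 + g @ Ag)
        # Newton-style step preconditioned by the inverse curvature matrix.
        self.w -= self.A_inv @ g

rng = np.random.default_rng(1)
d, T = 10, 500
w_star = rng.standard_normal(d)
learner = OnlineNewtonStep(d)
for _ in range(T):
    x = rng.standard_normal(d)
    learner.update(x, w_star @ x)
print("final parameter error:", np.linalg.norm(learner.w - w_star))
```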
