Abstract
Kernel online convex optimization (KOCO) is a framework combining the expressiveness of nonparametric kernel models with the regret guarantees of online learning. First-order KOCO methods such as functional gradient descent require only O(t) time and space per iteration, and, when the only information on the losses is their convexity, achieve a minimax optimal O(√T) regret. Nonetheless, many common losses in kernel problems, such as squared loss, logistic loss, and squared hinge loss, possess stronger curvature that can be exploited. In this case, second-order KOCO methods achieve O(log(Det(K))) regret, which we show scales as O(d_eff log T), where d_eff is the effective dimension of the problem and is usually much smaller than O(√T). The main drawback of second-order methods is their much higher O(t²) space and time complexity. In this paper, we introduce kernel online Newton step (KONS), a new second-order KOCO method that also achieves O(d_eff log T) regret. To address the computational complexity of second-order methods, we introduce a new matrix sketching algorithm for the kernel matrix K_t, and show that for a chosen parameter γ ≤ 1 our Sketched-KONS reduces the space and time complexity by a factor of γ² to O(γ²t²) space and time per iteration, while incurring only 1/γ times more regret.
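
To make the O(d_eff log T) scaling concrete, the following is a sketch of the standard spectral argument (the eigenvalue notation λ_1, …, λ_T, the regularization parameter α, and the bounded-kernel assumption are ours, inferred from the usual definition of effective dimension d_eff(α) = Tr(K_T(K_T + αI)^{-1}) = Σ_i λ_i/(λ_i + α), and are not stated in the abstract itself):

\[
\log \mathrm{Det}\!\left(I + \tfrac{1}{\alpha} K_T\right)
  \;=\; \sum_{i=1}^{T} \log\!\left(1 + \tfrac{\lambda_i}{\alpha}\right)
  \;\le\; d_{\mathrm{eff}}(\alpha)\left(1 + \log\!\left(1 + \tfrac{\lambda_{\max}}{\alpha}\right)\right)
  \;=\; O(d_{\mathrm{eff}} \log T),
\]

where the inequality applies log(1 + x) ≤ (x/(1 + x))(1 + log(1 + x_max)) to each eigenvalue, and the last step uses λ_max ≤ Tr(K_T) = O(T), which holds whenever the kernel function is bounded. Since d_eff(α) counts only the eigenvalues larger than α, it is typically much smaller than T, which is why the second-order O(d_eff log T) regret can improve on the first-order O(√T) rate.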