Is Input Sparsity Time Possible for Kernel Low-Rank Approximation?


Abstract 

Low-rank approximation is a common tool used to accelerate kernel methods: the n x n kernel matrix K is approximated via a rank-k matrix K̃ which can be stored in much less space and processed more quickly. In this work we study the limits of computationally efficient low-rank kernel approximation. We show that for a broad class of kernels, including the popular Gaussian and polynomial kernels, computing a relative error rank-k approximation to K is at least as difficult as multiplying the input data matrix A ∈ R^{n×d} by an arbitrary matrix C ∈ R^{d×k}. Barring a breakthrough in fast matrix multiplication, when k is not too large, this requires Ω(nnz(A)·k) time, where nnz(A) is the number of non-zeros in A. This lower bound matches, in many parameter regimes, recent work on subquadratic time algorithms for low-rank approximation of general kernels [MM16, MW17], demonstrating that these algorithms are unlikely to be significantly improved, in particular to O(nnz(A)) input sparsity runtimes. At the same time there is hope: we show for the first time that O(nnz(A)) time approximation is possible for general radial basis function kernels (e.g., the Gaussian kernel) for the closely related problem of low-rank approximation of the kernelized dataset.
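To make the object of study concrete, here is a minimal Python sketch of the setup described in the abstract: forming the n x n Gaussian kernel matrix K from a data matrix A and comparing it against a rank-k approximation. The dataset, bandwidth sigma, and the truncated-SVD route are illustrative assumptions, not the paper's algorithm; the SVD step costs roughly O(n^3), which is exactly the expense that the subquadratic algorithms [MM16, MW17] and the lower bound discussed above are concerned with.

```python
# Minimal sketch (not the paper's algorithm): build the n x n Gaussian
# kernel matrix K for a small dataset A and compare it with a rank-k
# approximation obtained via a truncated SVD.
import numpy as np

n, d, k, sigma = 200, 10, 5, 1.0
rng = np.random.default_rng(0)
A = rng.standard_normal((n, d))          # input data matrix A in R^{n x d}

# Gaussian (RBF) kernel: K[i, j] = exp(-||a_i - a_j||^2 / (2 sigma^2))
sq_dists = np.sum((A[:, None, :] - A[None, :, :]) ** 2, axis=-1)
K = np.exp(-sq_dists / (2 * sigma ** 2))

# Best rank-k approximation of K (Eckart-Young), via a full SVD for clarity.
U, s, Vt = np.linalg.svd(K)
K_k = (U[:, :k] * s[:k]) @ Vt[:k, :]

rel_err = np.linalg.norm(K - K_k, "fro") / np.linalg.norm(K, "fro")
print(f"relative Frobenius error of rank-{k} approximation: {rel_err:.3e}")
```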

