
Towards a Zero-One Law for Column Subset Selection

2020-02-19

Abstract

There are a number of approximation algorithms for NP-hard versions of low rank approximation, such as finding a rank-$k$ matrix $B$ minimizing the sum of absolute values of differences to a given $n \times n$ matrix $A$, $\sum_{i,j} |A_{i,j} - B_{i,j}|$, or more generally finding a rank-$k$ matrix $B$ which minimizes the sum of $p$-th powers of absolute values of differences, $\sum_{i,j} |A_{i,j} - B_{i,j}|^p$. Many of these algorithms are linear time column subset selection algorithms, returning a subset of poly(k log n) columns whose cost is no more than a poly(k) factor larger than the cost of the best rank-$k$ matrix. The above error measures are special cases of the following general entrywise low rank approximation problem: given an arbitrary function $g : \mathbb{R} \to \mathbb{R}_{\geq 0}$, find a rank-$k$ matrix $B$ which minimizes $\sum_{i,j} g(A_{i,j} - B_{i,j})$. A natural question is which functions $g$ admit efficient approximation algorithms. Indeed, this is a central question of recent work studying generalized low rank models. In this work we give approximation algorithms for every function $g$ which is approximately monotone and satisfies an approximate triangle inequality, and we show both of these conditions are necessary. Further, our algorithm is efficient if the function $g$ admits an efficient approximate regression algorithm. Our approximation algorithms handle functions which are not even scale-invariant, such as the Huber loss function, which we show have very different structural properties than $\ell_p$-norms; e.g., one can show that the lack of scale-invariance causes any column subset selection algorithm to provably require a $\sqrt{\log n}$ factor larger number of columns than for $\ell_p$-norms. Nevertheless, we design the first efficient column subset selection algorithms for such error measures.
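To make the objective concrete, the sketch below (not the paper's algorithm) evaluates the entrywise cost $\sum_{i,j} g(A_{i,j} - B_{i,j})$ for a candidate column subset S: each column of A is fitted within the span of A[:, S] by minimizing g, here taken to be the Huber loss. The L-BFGS solve is a heuristic stand-in for the approximate g-regression oracle the abstract refers to, and the subset S and the parameter delta are purely illustrative.

```python
# Minimal sketch, assuming g is the Huber loss and numpy/scipy are available.
import numpy as np
from scipy.optimize import minimize

def huber(r, delta=1.0):
    """Entrywise Huber loss: quadratic near zero, linear in the tails."""
    a = np.abs(r)
    return np.where(a <= delta, 0.5 * a**2, delta * (a - 0.5 * delta))

def g_regression_cost(C, b, delta=1.0):
    """min_x sum_i huber((C x - b)_i), solved numerically as a stand-in
    for an approximate g-regression oracle."""
    x0 = np.linalg.lstsq(C, b, rcond=None)[0]          # least-squares warm start
    obj = lambda x: huber(C @ x - b, delta).sum()
    return minimize(obj, x0, method="L-BFGS-B").fun

def column_subset_cost(A, S, delta=1.0):
    """Total entrywise Huber cost of approximating A within span(A[:, S])."""
    C = A[:, S]
    return sum(g_regression_cost(C, A[:, j], delta) for j in range(A.shape[1]))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = rng.standard_normal((30, 8)) @ rng.standard_normal((8, 40))  # low-rank signal
    A += 0.1 * rng.standard_normal(A.shape)                          # entrywise noise
    S = [0, 1, 2, 3]                                                 # illustrative subset
    print("Huber column-subset cost:", column_subset_cost(A, S))
```

A column subset selection algorithm for this setting would choose S so that this cost is within a poly(k) factor of the best rank-$k$ approximation cost; the snippet only scores a given S.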

