Fast Multi-Stage Submodular Maximization

2020-03-03

Abstract

Motivated by extremely large-scale machine learning problems, we introduce a new multi-stage algorithmic framework for submodular maximization (called MultGreed), where at each stage we apply an approximate greedy procedure to maximize surrogate submodular functions. The surrogates serve as proxies for a target submodular function but require less memory and are easy to evaluate. We theoretically analyze the performance guarantee of the multi-stage framework and give examples of how to design instances of MultGreed for a broad range of natural submodular functions. We show that MultGreed performs very closely to the standard greedy algorithm given appropriate surrogate functions, and we argue how our framework can easily be integrated with distributive algorithms for further optimization. We complement our theory with an empirical evaluation on several real-world problems, including data subset selection on millions of speech samples, where MultGreed yields at least a thousand-fold speedup and superior results over state-of-the-art selection methods.
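The multi-stage idea described above can be illustrated with a minimal sketch. This is not the paper's exact MultGreed algorithm; the function names, the modular surrogate, and the per-stage budgets below are illustrative assumptions. Each stage runs the standard greedy rule on a cheaper surrogate to build a partial solution, and only the final stage evaluates the expensive target function:

```python
# Illustrative sketch of a multi-stage greedy framework (hypothetical names):
# earlier stages select under cheap surrogate functions, the last stage
# finishes the selection under the expensive target submodular function.

def greedy(f, ground, selected, k):
    """Standard greedy: repeatedly add the element with the largest
    marginal gain under set function f until |selected| reaches k."""
    selected = list(selected)
    remaining = [v for v in ground if v not in selected]
    while len(selected) < k and remaining:
        best = max(remaining, key=lambda v: f(selected + [v]) - f(selected))
        selected.append(best)
        remaining.remove(best)
    return selected

def mult_greed(surrogates, f_target, ground, budgets, k):
    """Run greedy on each surrogate up to an intermediate budget,
    then complete the solution with the target function."""
    selected = []
    for g, b in zip(surrogates, budgets):
        selected = greedy(g, ground, selected, b)
    return greedy(f_target, ground, selected, k)

# Toy example: the target is a coverage-style submodular function;
# the surrogate is a modular (additive) proxy that is trivial to evaluate.
ground = list(range(10))
f_target = lambda S: len(set(x % 4 for x in S)) + 0.1 * len(S)
surrogate = lambda S: 0.3 * len(S)  # cheap modular proxy
sol = mult_greed([surrogate], f_target, ground, budgets=[3], k=5)
print(len(sol))  # 5 elements selected in total
```

In a realistic instance the surrogate would be chosen so that its greedy choices nearly agree with the target's (which is what the paper's guarantees require), while costing far less per evaluation.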

