Multi-objective Maximization of Monotone Submodular Functions with Cardinality Constraint

2020-02-13

Abstract 

We consider the problem of multi-objective maximization of monotone submodular functions subject to a cardinality constraint, often formulated as $\max_{|A|=k}\min_{i\in\{1,\dots,m\}} f_i(A)$. While it is widely known that greedy methods work well for a single objective, the problem becomes much harder with multiple objectives. In fact, Krause et al. (2008) showed that when the number of objectives $m$ grows as fast as the cardinality $k$, i.e., $m=\Omega(k)$, the problem is inapproximable (unless $P=NP$). On the other hand, when $m$ is constant, Chekuri et al. (2010) showed a randomized $(1-1/e)-\epsilon$ approximation with runtime (number of queries to the function oracle) $n^{m/\epsilon^3}$. We focus on finding a fast and practical algorithm that has (asymptotic) approximation guarantees even when $m$ is super-constant. We first modify the algorithm of Chekuri et al. (2010) to achieve a $(1-1/e)$ approximation for $m=o\big(\frac{k}{\log^3 k}\big)$. This demonstrates a steep transition from constant-factor approximability to inapproximability around $m=\Omega(k)$. Then, using Multiplicative-Weight-Updates (MWU), we find a much faster $\tilde{O}(n/\delta^3)$ time asymptotic $(1-1/e)^2-\delta$ approximation. While the above results are all randomized, we also give a simple deterministic $(1-1/e)-\epsilon$ approximation with runtime $kn^{m/\epsilon^4}$. Finally, we run synthetic experiments using Kronecker graphs and find that our MWU-inspired heuristic outperforms existing heuristics.
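As a rough illustration of the MWU idea mentioned in the abstract, the Python sketch below greedily builds a set of size $k$ while multiplicatively re-weighting the $m$ objectives so that lagging objectives receive more attention in later greedy steps. The function name `mwu_greedy_maxmin`, the parameter `eta`, and the specific weight-update rule are illustrative assumptions; this is a minimal heuristic sketch, not the algorithm analyzed in the paper.

```python
import math
from typing import Callable, Hashable, Iterable, List, Set


def mwu_greedy_maxmin(
    objectives: List[Callable[[Set[Hashable]], float]],
    ground_set: Iterable[Hashable],
    k: int,
    eta: float = 0.5,
) -> Set[Hashable]:
    """Heuristic for max_{|A|<=k} min_i f_i(A): greedy selection guided by
    multiplicative-weight updates over the m objectives (illustrative only)."""
    elements = list(ground_set)
    m = len(objectives)
    weights = [1.0 / m] * m          # one weight per objective
    selected: Set[Hashable] = set()

    for _ in range(k):
        # Pick the element with the largest weighted marginal gain.
        base = [f(selected) for f in objectives]
        best_elem, best_gain = None, float("-inf")
        for e in elements:
            if e in selected:
                continue
            gain = sum(
                w * (f(selected | {e}) - b)
                for w, f, b in zip(weights, objectives, base)
            )
            if gain > best_gain:
                best_elem, best_gain = e, gain
        if best_elem is None:
            break
        selected.add(best_elem)

        # MWU step: objectives with small current value get exponentially
        # larger weight, so the next greedy step favors the lagging ones.
        values = [f(selected) for f in objectives]
        top = max(values) if max(values) > 0 else 1.0
        weights = [
            w * math.exp(eta * (1.0 - v / top)) for w, v in zip(weights, values)
        ]
        total = sum(weights)
        weights = [w / total for w in weights]

    return selected


if __name__ == "__main__":
    # Toy demo with coverage-style objectives f_i(A) = |A ∩ S_i|.
    groups = [{0, 1, 2}, {3, 4, 5}, {6, 7, 8}]
    objs = [lambda A, S=S: float(len(A & S)) for S in groups]
    print(mwu_greedy_maxmin(objs, range(9), k=3))
```

On the toy coverage instance above, the re-weighting pushes the heuristic to pick one element from each group, balancing the minimum objective rather than maximizing any single one.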
