Learning Submodular Losses with the Lovász Hinge


Abstract

Learning with non-modular losses is an important problem when sets of predictions are made simultaneously. The main tools for constructing convex surrogate loss functions for set prediction are margin rescaling and slack rescaling. In this work, we show that these strategies lead to tight convex surrogates iff the underlying loss function is increasing in the number of incorrect predictions. However, gradient or cutting-plane computation for these functions is NP-hard for non-supermodular loss functions. We propose instead a novel convex surrogate loss function for submodular losses, the Lovász hinge, which leads to O(p log p) complexity with O(p) oracle accesses to the loss function to compute a gradient or cutting-plane. As a result, we have developed the first tractable convex surrogates in the literature for submodular losses. We demonstrate the utility of this novel convex surrogate through a real-world image labeling task.
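The complexity claim corresponds to evaluating the Lovász extension of the set loss at the vector of margin violations: sorting the p violations costs O(p log p), and the extension's coefficients are marginal gains of the loss on a nested chain of sets, requiring p oracle calls in total. Below is a minimal Python sketch of that computation. The function name `lovasz_hinge`, the set-loss interface `loss_fn`, and the choice to threshold the full extension at zero are illustrative assumptions, not the paper's exact definition (the paper analyzes more than one thresholding variant).

```python
import numpy as np

def lovasz_hinge(scores, labels, loss_fn):
    """Sketch of a convex surrogate for a submodular set loss.

    scores : real-valued predictions f_i, shape (p,)
    labels : ground-truth labels y_i in {-1, +1}, shape (p,)
    loss_fn: oracle mapping a list of indices S to l(S), with
             l(empty set) = 0 and l assumed submodular, increasing.
    """
    s = 1.0 - labels * scores        # per-element margin violations
    order = np.argsort(-s)           # decreasing sort: O(p log p)
    surrogate = 0.0
    prev_loss = 0.0                  # l(empty set) = 0
    prefix = []
    for i in order:
        prefix.append(int(i))
        cur_loss = loss_fn(prefix)   # one oracle access per step: O(p) total
        # The marginal gain of adding element i is the coefficient of s[i]
        # in the Lovász extension, hence also a subgradient coordinate.
        surrogate += s[i] * (cur_loss - prev_loss)
        prev_loss = cur_loss
    return max(surrogate, 0.0)       # threshold the extension at zero

# Toy usage with l(S) = sqrt(|S|), which is submodular and increasing.
l = lambda S: np.sqrt(len(S))
y = np.array([1.0, -1.0, 1.0, 1.0])
f = np.array([0.3, 0.5, -0.2, 1.2])
print(lovasz_hinge(f, y, l))         # ~2.17 on this example
```

As a sanity check, for p = 1 with l({1}) = 1 the sorted sum has a single term equal to 1 - yf, so the sketch reduces to the familiar hinge loss max(0, 1 - yf).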

