
Scalable Multitask Representation Learning for Scene Classification

2019-12-12

Abstract

The underlying idea of multitask learning is that learning tasks jointly is better than learning each task individually. In particular, when only a few training examples are available per task, sharing a jointly trained representation improves classification performance. In this paper, we propose a novel multitask learning method that learns a low-dimensional representation jointly with the corresponding classifiers, which can then profit from the latent inter-class correlations. Our method scales with respect to the original feature dimension and can be used with high-dimensional image descriptors such as the Fisher Vector. Furthermore, it consistently outperforms the current state of the art on the SUN397 scene classification benchmark across varying amounts of training data.
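The abstract's core idea, a shared low-dimensional representation learned jointly with per-task classifiers, can be sketched on toy data. The following is a hypothetical illustration only (plain gradient descent on a factorized weight matrix, so each task's weights are U @ v_t), not the paper's scalable algorithm; all dimensions and variable names here are invented for the example.

```python
import numpy as np

# Hypothetical toy setup: T binary tasks whose true weight vectors lie in a
# shared k-dimensional subspace, mimicking latent inter-class correlations.
rng = np.random.default_rng(0)
d, k, T, n = 20, 3, 4, 60  # feature dim, latent dim, number of tasks, examples per task

U_true = rng.normal(size=(d, k))
tasks = []
for _ in range(T):
    w = U_true @ rng.normal(size=k)      # task weights inside the shared subspace
    X = rng.normal(size=(n, d))
    y = (X @ w > 0).astype(float)
    tasks.append((X, y))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Jointly minimize the summed logistic losses over the shared projection U
# (d x k) and the per-task classifiers V (k x T) with plain gradient descent.
U = rng.normal(scale=0.1, size=(d, k))
V = rng.normal(scale=0.1, size=(k, T))
lr = 0.2
for _ in range(2000):
    gU, gV = np.zeros_like(U), np.zeros_like(V)
    for t, (X, y) in enumerate(tasks):
        err = (sigmoid(X @ U @ V[:, t]) - y) / n   # d(mean loss)/d(logits)
        gU += np.outer(X.T @ err, V[:, t])         # shared gradient accumulates over tasks
        gV[:, t] = U.T @ (X.T @ err)               # per-task gradient
    U -= lr * gU
    V -= lr * gV

# Training accuracy per task after joint training.
accs = [float(np.mean((sigmoid(X @ U @ V[:, t]) > 0.5) == (y > 0.5)))
        for t, (X, y) in enumerate(tasks)]
```

Because every task's gradient updates the shared `U`, tasks with few examples benefit from the subspace estimated across all tasks, which is the effect the abstract describes.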

Previous: Random Laplace Feature Maps for Semigroup Kernels on Histograms

Next: Two-View Camera Housing Parameters Calibration for Multi-Layer Flat Refractive Interface
