Deep Asymmetric Multi-task Feature Learning

2020-03-19

Abstract

We propose Deep Asymmetric Multitask Feature Learning (Deep-AMTFL), which can learn deep representations shared across multiple tasks while effectively preventing the negative transfer that may happen in the feature sharing process. Specifically, we introduce an asymmetric autoencoder term that allows reliable predictors for the easy tasks to contribute strongly to the feature learning while suppressing the influence of unreliable predictors for the more difficult tasks. This allows the learning of less noisy representations, and enables unreliable predictors to exploit knowledge from the reliable predictors via the shared latent features. Such asymmetric knowledge transfer through shared features is also more scalable and efficient than inter-task asymmetric transfer. We validate our Deep-AMTFL model on multiple benchmark datasets for multitask learning and image classification, on which it significantly outperforms existing symmetric and asymmetric multitask learning models by effectively preventing negative transfer in deep feature learning.
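The abstract's core idea is an asymmetric feedback term: each task's predictions help reconstruct the shared features, but the contribution of a task is scaled down when its predictor is unreliable (high training loss). The following is a minimal numpy sketch of that idea, not the paper's exact objective; the loss-dependent weighting `1 / (1 + task_loss)`, the reconstruction matrix `recon_W`, and the trade-off coefficient `gamma` are illustrative assumptions.

```python
import numpy as np

def amtfl_style_loss(Z, preds, targets, recon_W, gamma=0.1):
    """Sketch of an asymmetric feature-feedback loss (illustrative only).

    Z:       (n, d) shared latent features
    preds:   (n, T) per-task predictions
    targets: (n, T) per-task labels
    recon_W: (T, d) hypothetical map from task outputs back to feature space
    gamma:   weight of the reconstruction (feedback) term
    """
    # Per-task squared-error losses: a proxy for predictor reliability.
    task_losses = ((preds - targets) ** 2).mean(axis=0)   # shape (T,)
    # Asymmetry: easy (low-loss) tasks feed back strongly into the
    # shared features; hard (high-loss) tasks are suppressed.
    weights = 1.0 / (1.0 + task_losses)                   # shape (T,)
    # Reconstruct the shared features from the reliability-weighted outputs.
    Z_hat = (preds * weights) @ recon_W                   # shape (n, d)
    recon = ((Z - Z_hat) ** 2).mean()
    # Total: fit every task, plus the asymmetric autoencoder-style term.
    return task_losses.sum() + gamma * recon
```

In a real model the task losses, features, and `recon_W` would all be learned jointly by gradient descent; the point of the sketch is only that the reconstruction term is weighted per task, so noisy tasks cannot corrupt the shared representation.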

Previous: Large-Scale Sparse Inverse Covariance Estimation via Thresholding and Max-Det Matrix Completion

Next: Using Inherent Structures to design Lean 2-layer RBMs

