
Self-Paced Multitask Learning with Shared Knowledge

2019-10-29
Abstract: This paper introduces self-paced task selection to multitask learning, in which instances from more closely related tasks are selected in a progression from easier to harder tasks, emulating an effective human education strategy in a multitask machine-learning setting. We develop the mathematical foundation for the approach, based on iterative selection of the most appropriate task, learning of the task parameters, and updating of the shared knowledge, optimizing a new bi-convex loss function. The proposed method applies quite generally, including to multitask feature learning and multitask learning with alternating structure optimization. Results show that in each of these formulations, self-paced (easier-to-harder) task selection outperforms the baseline version of the method in all experiments.
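The abstract's iterative scheme (pick the currently most appropriate task, fit its parameters, then update the shared knowledge) can be illustrated with a toy sketch. This is a generic illustration under simplifying assumptions, not the paper's exact formulation: tasks are synthetic linear regressions, "shared knowledge" is a common weight vector, and task easiness is approximated by current mean-squared loss.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy multitask setup: T linear-regression tasks whose predictor is the sum of
# a shared component and a task-specific component. All names and the loss
# form below are illustrative assumptions, not the paper's formulation.
T, d, n = 4, 5, 30
w_shared = np.zeros(d)                      # shared knowledge
w_task = [np.zeros(d) for _ in range(T)]    # task-specific parameters
tasks = []
for t in range(T):
    X = rng.normal(size=(n, d))
    noise = 0.1 * (t + 1)                   # later tasks are noisier, i.e. "harder"
    y = X @ rng.normal(size=d) + rng.normal(scale=noise, size=n)
    tasks.append((X, y))

def task_loss(t):
    """Current mean-squared error of task t, used as an easiness proxy."""
    X, y = tasks[t]
    r = X @ (w_shared + w_task[t]) - y
    return float(np.mean(r ** 2))

selected, order = set(), []
lr, lam = 0.05, 0.1
while len(selected) < T:
    # Self-paced step: among not-yet-selected tasks, pick the easiest one.
    t = min((t for t in range(T) if t not in selected), key=task_loss)
    selected.add(t)
    order.append(t)
    X, y = tasks[t]
    # Alternate the two convex subproblems of the (here: simple ridge-style)
    # objective: update task parameters with shared weights fixed, then the
    # shared weights with task parameters fixed.
    for _ in range(100):
        r = X @ (w_shared + w_task[t]) - y
        g = X.T @ r / len(y)
        w_task[t] -= lr * (g + lam * w_task[t])
        w_shared -= lr * g

print("selection order (easiest first):", order)
```

Because each selected task is fit before the next selection, early tasks shape `w_shared`, which is the sense in which easier tasks bootstrap the harder ones.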

