Gradient-Based Meta-Learning with Learned Layerwise Metric and Subspace

2020-03-16

Abstract

Gradient-based meta-learning methods leverage gradient descent to learn the commonalities among various tasks. While previous such methods have been successful in meta-learning tasks, they resort to simple gradient descent during meta-testing. Our primary contribution is the MT-net, which enables the meta-learner to learn, on each layer's activation space, a subspace in which the task-specific learner performs gradient descent. Additionally, a task-specific learner of an MT-net performs gradient descent with respect to a meta-learned distance metric, which warps the activation space to be more sensitive to task identity. We demonstrate that the dimension of this learned subspace reflects the complexity of the task-specific learner's adaptation task, and also that our model is less sensitive to the choice of initial learning rates than previous gradient-based meta-learning methods. Our method achieves state-of-the-art or comparable performance on few-shot classification and regression tasks.
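The core idea in the abstract — a meta-learned metric that warps each layer's activations, plus a meta-learned subspace that restricts task-specific adaptation — can be illustrated with a minimal sketch. The names `T`, `W`, and `M` below are illustrative placeholders, not the authors' released code: `T` is the meta-learned warp (frozen during task adaptation), `W` holds the task-specific weights, and the binary mask `M` selects the subspace of `W` that inner-loop gradient descent is allowed to update.

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_out = 4, 3

# Meta-learned quantities (fixed during the inner, task-specific loop):
T = rng.normal(size=(d_out, d_out))   # metric/warp on the activation space
M = np.zeros((d_out, d_in))           # subspace mask; here the first two
M[:, :2] = 1.0                        # input directions are adaptable

# Task-specific weights, adapted by gradient descent within the masked subspace:
W = rng.normal(size=(d_out, d_in))

def forward(W, x):
    """Warped linear layer: activations are transformed by the meta-learned T."""
    return T @ W @ x

def inner_step(W, x, y, lr=0.1):
    """One task-specific gradient step on the masked subspace of W,
    for the squared-error loss 0.5 * ||T W x - y||^2."""
    err = forward(W, x) - y
    grad_W = T.T @ np.outer(err, x)   # dL/dW under the warped metric
    return W - lr * M * grad_W        # only masked directions move

x = rng.normal(size=d_in)
y = rng.normal(size=d_out)
W1 = inner_step(W, x, y)

# Entries outside the learned subspace are untouched by adaptation.
assert np.allclose(W1[M == 0], W[M == 0])
```

Because the gradient is premultiplied by `T.T` and masked by `M`, the task-specific learner effectively descends in a warped, lower-dimensional coordinate system chosen by the meta-learner, which is the mechanism the abstract attributes to MT-net's reduced sensitivity to the inner-loop learning rate.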

