Meta-Learning by Adjusting Priors Based on Extended PAC-Bayes Theory

2020-03-16

Abstract

In meta-learning, an agent extracts knowledge from observed tasks, aiming to facilitate learning of novel future tasks. Under the assumption that future tasks are ‘related’ to previous tasks, accumulated knowledge should be learned in a way that captures the common structure across learned tasks, while allowing the learner sufficient flexibility to adapt to novel aspects of new tasks. We present a framework for meta-learning that is based on generalization error bounds, allowing us to extend various PAC-Bayes bounds to meta-learning. Learning takes place through the construction of a distribution over hypotheses based on the observed tasks, and its utilization for learning a new task. Thus, prior knowledge is incorporated through setting an experience-dependent prior for novel tasks. We develop a gradient-based algorithm which minimizes an objective function derived from the bounds and demonstrate its effectiveness numerically with deep neural networks. In addition to establishing the improved performance available through meta-learning, we demonstrate the intuitive way by which prior information is manifested at different levels of the network.
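
The abstract describes a gradient-based algorithm that minimizes an objective derived from PAC-Bayes bounds, with a shared, experience-dependent prior and per-task posteriors over network weights. The sketch below is only an illustrative guess at what such an objective can look like, not the paper's exact formulation: the names (`kl_diag_gauss`, `meta_objective`), the toy linear model, the diagonal-Gaussian parameterization, and the McAllester-style square-root complexity term are all assumptions made for this example.

```python
# Illustrative sketch of a PAC-Bayes-style meta-learning objective
# (hypothetical; not the authors' exact bound or code).
import torch

def kl_diag_gauss(mu_q, logvar_q, mu_p, logvar_p):
    """KL( N(mu_q, diag exp(logvar_q)) || N(mu_p, diag exp(logvar_p)) )."""
    return 0.5 * torch.sum(
        logvar_p - logvar_q
        + (logvar_q.exp() + (mu_q - mu_p) ** 2) / logvar_p.exp()
        - 1.0
    )

def meta_objective(task_data, prior_mu, prior_logvar, posteriors, n_mc=5):
    """Average over tasks of: expected empirical loss under the task posterior
    plus a square-root KL complexity term scaled by the task sample size."""
    total = 0.0
    for (x, y), (mu_q, logvar_q) in zip(task_data, posteriors):
        m = x.shape[0]
        # Monte-Carlo estimate of the expected empirical loss under Q_i,
        # kept differentiable via the reparameterization trick.
        emp = 0.0
        for _ in range(n_mc):
            w = mu_q + torch.randn_like(mu_q) * (0.5 * logvar_q).exp()
            pred = x @ w                      # toy linear model for illustration
            emp = emp + torch.mean((pred - y) ** 2)
        emp = emp / n_mc
        kl = kl_diag_gauss(mu_q, logvar_q, prior_mu, prior_logvar)
        # The constant 10.0 stands in for the log(m/delta)-type terms of the bound.
        total = total + emp + torch.sqrt((kl + 10.0) / (2 * (m - 1)))
    return total / len(task_data)
```

Under a setup like this, the shared prior (`prior_mu`, `prior_logvar`) and all task posteriors would be optimized jointly on the observed tasks; a novel task would then be learned by freezing the learned prior and optimizing only that task's posterior against the same per-task term.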

Previous: Hierarchical Imitation and Reinforcement Learning

Next: Stochastic Variance-Reduced Cubic Regularized Newton Methods

Popular Resources

  • Learning to Predi...

    Much of model-based reinforcement learning invo...

  • Stratified Strate...

    In this paper we introduce Stratified Strategy ...

  • The Variational S...

    Unlike traditional images which do not offer in...

  • A Mathematical Mo...

    Direct democracy, where each voter casts one vo...

  • Rating-Boosted La...

    The performance of a recommendation system reli...