Risk Bounds for Transferring Representations With and Without Fine-Tuning

2020-03-09

Abstract

A popular machine learning strategy is the transfer of a representation (i.e., a feature extraction function) learned on a source task to a target task. Examples include the re-use of neural network weights or word embeddings. We develop sufficient conditions for the success of this approach. If the representation learned from the source task is fixed, we identify conditions on how the tasks relate to obtain an upper bound on target task risk via a VC dimension-based argument. We then consider using the representation from the source task to construct a prior, which is fine-tuned using target task data. We give a PAC-Bayes target task risk bound in this setting under suitable conditions. We show examples of our bounds using feedforward neural networks. Our results motivate a practical approach to weight transfer, which we validate with experiments.
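To make the two transfer modes concrete, here is a minimal, hypothetical PyTorch sketch (not code from the paper): in the first mode, the source-trained feature extractor is copied and kept fixed, so only a new head is trained on target data; in the second, the transferred weights serve as an initialization (the prior, in the PAC-Bayes view) and all parameters are fine-tuned. The architecture, names, and toy training step are illustrative assumptions.

```python
# Illustrative sketch of the two transfer modes described in the abstract.
# All names (FeedforwardNet, dimensions, learning rates) are hypothetical.
import torch
import torch.nn as nn

class FeedforwardNet(nn.Module):
    """A feature extractor (the representation) followed by a task head."""
    def __init__(self, in_dim=20, hidden=64, out_dim=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.head = nn.Linear(hidden, out_dim)

    def forward(self, x):
        return self.head(self.features(x))

# Assume `source_model` has already been trained on the source task.
source_model = FeedforwardNet()

# Mode 1: transfer the representation and keep it FIXED.
# Only the new head is trained on target data, so the effective
# hypothesis class (and its VC-style capacity) is that of the head alone.
fixed_model = FeedforwardNet()
fixed_model.features.load_state_dict(source_model.features.state_dict())
for p in fixed_model.features.parameters():
    p.requires_grad = False
opt_fixed = torch.optim.SGD(fixed_model.head.parameters(), lr=1e-2)

# Mode 2: transfer the weights as an initialization (prior) and FINE-TUNE.
# All parameters are updated; staying close to the transferred prior keeps
# the KL term in a PAC-Bayes bound small.
tuned_model = FeedforwardNet()
tuned_model.load_state_dict(source_model.state_dict())
opt_tuned = torch.optim.SGD(tuned_model.parameters(), lr=1e-3)

# Toy target-task step on random data, just to show the training loop.
x = torch.randn(32, 20)
y = torch.randint(0, 2, (32,))
loss_fn = nn.CrossEntropyLoss()
for model, opt in [(fixed_model, opt_fixed), (tuned_model, opt_tuned)]:
    opt.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    opt.step()
```

The two optimizers reflect the trade-off the paper's bounds formalize: the fixed mode restricts capacity, while the fine-tuned mode relies on the transferred prior being close to a good target hypothesis.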
