GRADIENTS AS FEATURES FOR DEEP REPRESENTATION LEARNING

2020-01-02

Abstract

We address the challenging problem of deep representation learning: the efficient adaptation of a pre-trained deep network to different tasks. Specifically, we propose to explore gradient-based features. These features are gradients of a task-specific loss with respect to the model parameters, evaluated at an input sample. Our key innovation is the design of a linear model that incorporates both gradient features and the activations of the network. We show that our model provides a local linear approximation to an underlying deep model, and discuss important theoretical insights. Moreover, we present an efficient algorithm for the training and inference of our model without computing the actual gradients. Our method is evaluated across a number of representation learning tasks on several datasets and using different network architectures. Strong results are obtained in all settings, and are well-aligned with our theoretical insights.
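As a rough illustration of the gradient-feature idea described in the abstract, the sketch below computes per-sample gradient features for a toy logistic model and concatenates them with the network activation. The toy model, the fixed pseudo-target, and all names here are illustrative assumptions, not the paper's actual architecture, loss, or algorithm (which avoids computing explicit gradients).

```python
import numpy as np

# Hypothetical sketch: a "pre-trained" one-layer logistic model
# f(x) = sigmoid(w . x + b).  The gradient feature for a sample x is the
# gradient of a task-specific loss w.r.t. the parameters (w, b), here a
# logistic loss against an assumed fixed pseudo-target.
rng = np.random.default_rng(0)
w = rng.normal(size=3)   # stand-in for pre-trained weights
b = 0.1                  # stand-in for pre-trained bias

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def gradient_feature(x, y_pseudo=1.0):
    """Per-sample feature: loss gradients w.r.t. (w, b) plus the activation."""
    p = sigmoid(w @ x + b)                 # network activation
    dz = p - y_pseudo                      # d(logistic loss)/d(pre-activation)
    grad = np.concatenate([dz * x, [dz]])  # gradients w.r.t. w and b
    return np.concatenate([grad, [p]])     # gradient features + activation

x = rng.normal(size=3)
feat = gradient_feature(x)
print(feat.shape)  # (5,) -> 3 weight grads + 1 bias grad + 1 activation
```

A linear classifier trained on such features would correspond to the paper's linear model over gradients and activations; for real networks the gradient vector is high-dimensional, which is why an efficient algorithm that avoids materializing it matters.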

