Inductive Two-layer Modeling with Parametric Bregman Transfer

2020-03-16

Abstract

Latent prediction models, exemplified by multilayer networks, employ hidden variables that automate abstract feature discovery. They typically pose nonconvex optimization problems, and effective semi-definite programming (SDP) relaxations have been developed to enable global solutions (Aslan et al., 2014). However, these models rely on nonparametric training of layer-wise kernel representations, and are therefore restricted to transductive learning, which slows down test prediction. In this paper, we develop a new inductive learning framework for parametric transfer functions using matching losses. The result for ReLU utilizes completely positive matrices, and the inductive learner not only delivers superior accuracy but also offers an order-of-magnitude speedup over SDP, with constant approximation guarantees.
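The matching loss mentioned in the abstract pairs a transfer function f with the Bregman divergence of its convex potential F (where F' = f), which makes the resulting per-layer objective convex in the pre-activations. The paper's construction is not reproduced here; as a minimal, hypothetical sketch of the idea, the snippet below computes a Bregman divergence for the softplus potential, whose derivative is the sigmoid transfer (a smooth stand-in for ReLU):

```python
import math

def bregman(F, f, a, b):
    """Bregman divergence D_F(a, b) = F(a) - F(b) - f(b) * (a - b),
    where f = F' is the transfer function. Nonnegative for convex F,
    and zero exactly when a == b."""
    return F(a) - F(b) - f(b) * (a - b)

# Softplus potential F(u) = log(1 + e^u); its derivative is the
# sigmoid transfer f(u) = 1 / (1 + e^{-u}).  (Illustrative choice,
# not the ReLU construction from the paper.)
F = lambda u: math.log1p(math.exp(u))
f = lambda u: 1.0 / (1.0 + math.exp(-u))

print(bregman(F, f, 2.0, -1.0))   # strictly positive for a != b
print(bregman(F, f, 0.5, 0.5))    # 0.0 when a == b
```

Training a layer against such a matching loss, rather than squared error composed with a nonlinearity, is what lets the framework keep each layer's subproblem convex.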
