Exponential Family Sparse Coding with Applications to Self-taught Learning


Abstract: Sparse coding is an unsupervised learning algorithm for finding concise, slightly higher-level representations of inputs, and has been successfully applied to self-taught learning, where the goal is to use unlabeled data to help on a supervised learning task, even if the unlabeled data cannot be associated with the labels of the supervised task [Raina et al., 2007]. However, sparse coding uses a Gaussian noise model and a quadratic loss function, and thus performs poorly if applied to binary-valued, integer-valued, or other non-Gaussian data, such as text. Drawing on ideas from generalized linear models (GLMs), we present a generalization of sparse coding to learning with data drawn from any exponential family distribution (such as Bernoulli or Poisson). This gives a method that we argue is much better suited than the Gaussian model to these other data types. We present an algorithm for solving the L1-regularized optimization problem defined by this model, and show that it is especially efficient when the optimal solution is sparse. We also show that the new model results in significantly improved self-taught learning performance when applied to text classification and to a robotic perception task.
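To make the generalization described in the abstract concrete, here is a minimal sketch of the objectives involved, assuming the standard GLM parameterization in which the natural parameter is the linear combination η = Bs of basis vectors; the notation is ours and may differ from the paper's.

```latex
% Classical (Gaussian) sparse coding over inputs x^{(i)} with basis B:
\min_{B,\;\{s^{(i)}\}} \;\sum_i \bigl\|x^{(i)} - B s^{(i)}\bigr\|_2^2
  \;+\; \beta \sum_i \bigl\|s^{(i)}\bigr\|_1

% Exponential-family generalization: model each input through the GLM
% natural parameter \eta = B s, with sufficient statistics T(x) and
% log-partition function a(\cdot):
P(x \mid B, s) \;=\; h(x)\,\exp\!\bigl((B s)^\top T(x) - a(B s)\bigr)

% which replaces the quadratic loss with a negative log-likelihood:
\min_{B,\;\{s^{(i)}\}} \;\sum_i \Bigl[-\log P\bigl(x^{(i)} \mid B, s^{(i)}\bigr)
  \;+\; \beta \bigl\|s^{(i)}\bigr\|_1\Bigr]
```

For intuition on the Bernoulli case mentioned in the abstract (binary data), the per-input activation step reduces to L1-regularized logistic regression in s. The sketch below solves it with generic proximal gradient descent (ISTA); this is not the paper's specialized solver, and all names and parameter values are illustrative assumptions.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def bernoulli_sparse_code(x, B, beta=0.1, lr=0.05, n_iters=1000):
    """ISTA sketch of the activation step for binary data:
    min_s  -log P(x | B, s) + beta * ||s||_1,
    where P(x | B, s) is Bernoulli with natural parameter eta = B @ s."""
    d, k = B.shape
    s = np.zeros(k)
    for _ in range(n_iters):
        eta = B @ s
        # Gradient of the Bernoulli negative log-likelihood w.r.t. s.
        grad = B.T @ (sigmoid(eta) - x)
        s = s - lr * grad
        # Soft-thresholding: the proximal operator of the L1 penalty.
        s = np.sign(s) * np.maximum(np.abs(s) - lr * beta, 0.0)
    return s

# Toy usage: a random basis and a random binary input.
rng = np.random.default_rng(0)
B = rng.normal(size=(20, 10))
x = rng.integers(0, 2, size=20).astype(float)
s = bernoulli_sparse_code(x, B)
print("nonzero activations:", np.count_nonzero(s))
```

The soft-thresholding step is what produces exactly-zero activations, which is consistent with the abstract's point that the optimization can be especially efficient when the optimal solution is sparse.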

