Efficient Similarity Derived from Kernel-Based Transition Probability

2020-04-02

Abstract

Semi-supervised learning effectively integrates labeled and unlabeled samples for classification, and most of the methods are founded on the pair-wise similarities between the samples. In this paper, we propose methods to construct similarities from a probabilistic viewpoint, whereas similarities have so far been formulated in a heuristic manner, such as by k-NN. We first propose a kernel-based formulation of transition probabilities by considering kernel least squares in a probabilistic framework. The similarities are consequently derived from the kernel-based transition probabilities, which are efficiently computed, and the similarities are inherently sparse without applying k-NN. In the case of multiple types of kernel functions, multiple transition probabilities are obtained correspondingly. From the probabilistic viewpoint, they can be integrated with prior probabilities, i.e., linear weights, and we propose a computationally efficient method to optimize the weights in a discriminative manner, as in multiple kernel learning. The novel similarity is thereby constructed from the composite transition probability, and it benefits semi-supervised learning methods as well. In various experiments on semi-supervised learning problems, the proposed methods demonstrate favorable performance compared to other methods, in terms of both classification performance and computation time.
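The pipeline described above can be sketched in a few lines: build a kernel matrix, normalize its rows into a stochastic transition matrix, combine transition matrices from multiple kernels with prior weights, and symmetrize the result into a similarity. This is a minimal illustrative sketch, not the paper's method: the RBF kernel, row normalization, and fixed weights are assumptions for illustration, whereas the paper derives the transitions from kernel least squares and optimizes the weights discriminatively.

```python
import numpy as np

def rbf_kernel(X, gamma=1.0):
    # Pairwise RBF kernel: K[i, j] = exp(-gamma * ||x_i - x_j||^2).
    # (Assumed kernel choice; any positive kernel could be used.)
    sq = np.sum(X ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * (X @ X.T)
    return np.exp(-gamma * np.maximum(d2, 0.0))

def transition_probabilities(K):
    # Row-normalize the kernel matrix into a stochastic matrix P,
    # a simple stand-in for the paper's kernel-least-squares formulation.
    return K / K.sum(axis=1, keepdims=True)

def composite_transition(Ps, weights):
    # Convex combination of per-kernel transition matrices; the weights
    # play the role of prior probabilities (here fixed, not learned).
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    return sum(wi * Pi for wi, Pi in zip(w, Ps))

def symmetric_similarity(P):
    # Symmetrize the transition matrix into a pairwise similarity.
    return 0.5 * (P + P.T)
```

A usage example: with two kernels (two gamma values), each yields its own transition matrix, and the composite similarity is the symmetrized weighted mixture.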

