Learning a Distance Metric by Empirical Loss Minimization
Wei Bian and Dacheng Tao


Abstract: In this paper, we study the problem of learning a distance metric and propose a loss-function-based metric learning framework, in which the metric is estimated by minimizing an empirical risk over a training set. Under mild conditions on the instance distribution and the loss function, we prove that the empirical risk converges to its expected counterpart at the root-n rate. In addition, assuming that the best metric minimizing the expected risk is bounded, we prove that the learned metric is consistent. Two example algorithms are derived from the proposed framework, using the log loss and a smoothed hinge loss, respectively. Experimental results support the effectiveness of the proposed algorithms.
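To make the setup concrete, the following minimal sketch (an illustration, not the authors' algorithm) shows loss-based metric learning in the spirit of the abstract: a Mahalanobis metric M = LᵀL is fit by gradient descent on an empirical risk built from a log (logistic) loss over labeled instance pairs. The pair encoding (+1 similar, -1 dissimilar), the margin, the optimizer, and all function names are assumptions made for this example.

```python
import numpy as np
from scipy.special import expit  # numerically stable sigmoid


def learn_metric_logloss(X, pairs, y, margin=1.0, lr=0.05, epochs=300, seed=0):
    """Illustrative sketch: learn a Mahalanobis metric M = L^T L by
    minimizing an empirical log-loss risk over labeled pairs.

    X     : (n, d) array of instances
    pairs : (m, 2) array of index pairs (i, j)
    y     : (m,) labels, +1 for similar pairs, -1 for dissimilar pairs
    The pair labels, margin, and plain gradient descent are assumptions,
    not the procedure from the paper.
    """
    d = X.shape[1]
    rng = np.random.default_rng(seed)
    L = rng.normal(scale=0.1, size=(d, d))
    diffs = X[pairs[:, 0]] - X[pairs[:, 1]]           # x_i - x_j for each pair

    for _ in range(epochs):
        proj = diffs @ L.T                            # L (x_i - x_j)
        d2 = np.sum(proj ** 2, axis=1)                # squared Mahalanobis distances
        z = y * (margin - d2)                         # signed margins
        # log loss: mean(log(1 + exp(-z))); its derivative w.r.t. d2 is sigmoid(-z) * y
        coef = (expit(-z) * y)[:, None]
        grad = 2.0 * (proj * coef).T @ diffs / len(y)  # gradient of the risk w.r.t. L
        L -= lr * grad

    return L.T @ L                                    # learned positive semidefinite metric M
```

Distances under the learned metric are then (x_i - x_j)ᵀ M (x_i - x_j); swapping the log loss for a smoothed hinge loss would only change the per-pair loss and its derivative with respect to the squared distance.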

