Does Tail Label Help for Large-Scale Multi-Label Learning
Tong Wei and Yu-Feng Li


Abstract

Large-scale multi-label learning annotates relevant labels for unseen data from a huge number of candidate labels. It is well known that in large-scale multi-label learning, labels exhibit a long-tail distribution in which a significant fraction of labels are tail labels. Nonetheless, how tail labels affect the performance metrics used in large-scale multi-label learning has not been explicitly quantified. In this paper, we show that whether labels are randomly missing or misclassified, tail labels have far less impact than common labels on the commonly used performance metrics (Top-k precision and nDCG@k). Building on this observation, we develop a low-complexity large-scale multi-label learning algorithm that adaptively trims tail labels, with the goal of fast prediction and compact models. Experiments clearly verify that, for state-of-the-art approaches, both prediction time and model size are significantly reduced without sacrificing much predictive performance.
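
For readers unfamiliar with the two metrics named in the abstract, the following is a minimal sketch (not the authors' code) of how Top-k precision (P@k) and nDCG@k are typically computed for a single test instance in extreme multi-label classification; `y_true` and `scores` are hypothetical dense NumPy arrays used only for illustration.

```python
import numpy as np

def precision_at_k(y_true, scores, k=5):
    """P@k: fraction of the top-k predicted labels that are relevant."""
    top_k = np.argsort(-scores)[:k]          # indices of the k highest-scoring labels
    return y_true[top_k].sum() / k

def ndcg_at_k(y_true, scores, k=5):
    """nDCG@k: discounted cumulative gain over the top-k predictions,
    normalized by the ideal DCG for this instance."""
    top_k = np.argsort(-scores)[:k]
    gains = y_true[top_k]
    discounts = 1.0 / np.log2(np.arange(2, k + 2))   # 1/log2(rank+1), rank = 1..k
    dcg = (gains * discounts).sum()
    ideal_hits = min(k, int(y_true.sum()))           # best case: all top slots relevant
    idcg = discounts[:ideal_hits].sum()
    return dcg / idcg if idcg > 0 else 0.0

# Toy example: 10 candidate labels, 3 of them relevant.
y_true = np.zeros(10)
y_true[[1, 4, 7]] = 1.0
scores = np.random.rand(10)
print(precision_at_k(y_true, scores, k=5), ndcg_at_k(y_true, scores, k=5))
```

Both metrics only inspect the top of the predicted ranking, which is why labels that occur rarely (tail labels) contribute comparatively little to them; this is the intuition behind adaptively trimming tail labels without sacrificing much measured performance.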


