Early Active Learning via Robust Representation and Structured Sparsity

Abstract: Labeling training data is time-consuming but essential for supervised learning models. To address this problem, active learning has been studied and applied to select informative and representative data points for labeling. However, during the early stage of an experiment, only a small number of labeled data points (or none at all) exist, so the most representative samples should be selected first. In this paper, we propose a novel robust active learning method that handles the early-stage experimental design problem and selects the most representative data points. Selecting the representative samples is an NP-hard problem, so we employ a structured sparsity-inducing norm to relax the objective into an efficient convex formulation. Meanwhile, a robust sparse representation loss function is used to reduce the effect of outliers. A new efficient optimization algorithm is introduced to solve our non-smooth objective with low computational cost and proven global convergence. Empirical results on both single-label and multi-label classification benchmark data sets demonstrate the promise of our method.
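The abstract does not spell out the objective, so the following is only a hedged illustration rather than the authors' exact formulation or solver. A common convex relaxation of this kind reconstructs the data matrix X from a few of its own columns by minimizing ||X - XA||_{2,1} + gamma * ||A||_{2,1}: the column-wise l2 reconstruction loss limits the influence of outlier samples, and the row-wise l2,1 penalty on A drives most rows of A to zero, so the samples whose rows survive act as the representatives to be labeled first. The NumPy sketch below solves such a problem with a plain iteratively reweighted least-squares loop; the function name, the value of gamma, the iteration count, and the selection rule are all illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def select_representatives(X, gamma=1.0, n_select=10, n_iter=50, eps=1e-8):
    """Pick representative columns of X (shape d x n, one sample per column)
    by approximately solving  min_A ||X - X A||_{2,1} + gamma * ||A||_{2,1}
    with iteratively reweighted least squares. Samples whose rows of A have
    the largest norms are returned as the representatives."""
    d, n = X.shape
    A = np.zeros((n, n))
    XtX = X.T @ X
    col_w = np.ones(n)   # per-sample residual weights (robust l2,1 loss)
    row_w = np.ones(n)   # per-row weights (structured-sparsity l2,1 penalty)
    for _ in range(n_iter):
        W = np.diag(row_w)
        # closed-form ridge-like update of each column of A under current weights
        for j in range(n):
            lhs = XtX + (gamma / col_w[j]) * W
            A[:, j] = np.linalg.solve(lhs, XtX[:, j])
        # refresh the IRLS weights from the new residuals and rows of A
        resid = X - X @ A
        col_w = 1.0 / (2.0 * np.linalg.norm(resid, axis=0) + eps)
        row_w = 1.0 / (2.0 * np.linalg.norm(A, axis=1) + eps)
    scores = np.linalg.norm(A, axis=1)           # large row norm => sample is heavily reused
    return np.argsort(scores)[::-1][:n_select]   # indices of the top representatives

# Toy usage: 40 unlabeled samples with 5-dimensional features.
rng = np.random.default_rng(0)
X = rng.standard_normal((5, 40))
print(select_representatives(X, gamma=0.5, n_select=5))
```

The reweighting scheme (weights of the form 1 / (2 * norm)) is the standard surrogate for l2,1 terms; a production implementation would also track the objective value and stop once it has converged instead of running a fixed number of iterations.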
