Stochastic Proximal Algorithms for AUC Maximization

2020-03-19

Abstract

Stochastic optimization algorithms such as stochastic gradient descent (SGD) update the model sequentially with cheap per-iteration costs, making them amenable to large-scale data analysis. However, most existing studies focus on classification accuracy and cannot be directly applied to the important problems of maximizing the area under the ROC curve (AUC) in imbalanced classification and bipartite ranking. In this paper, we develop a novel stochastic proximal algorithm for AUC maximization, referred to as SPAM. In contrast to the previous literature, SPAM accommodates a non-smooth penalty function and achieves a convergence rate of O(log t / t) for strongly convex objective functions, while both the space and per-iteration costs are of one datum.
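The abstract's two key ingredients are a stochastic gradient step whose per-iteration cost is one datum, followed by a proximal step that handles the non-smooth penalty. Below is a minimal Python sketch of that generic stochastic proximal pattern, assuming an l1 penalty (whose proximal operator is soft-thresholding); the toy squared-loss gradient, the step-size schedule eta_t = 1/t, and the function names are illustrative assumptions, not the paper's exact SPAM update for the AUC objective.

```python
import numpy as np

def prox_l1(v, tau):
    # Soft-thresholding: the proximal operator of the scaled l1 norm tau * ||.||_1.
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def stochastic_proximal_step(w, grad, eta, lam):
    # Generic stochastic proximal update:
    #   w <- prox_{eta * lam * ||.||_1}(w - eta * grad)
    # The gradient step uses one datum; the prox handles the non-smooth penalty.
    return prox_l1(w - eta * grad, eta * lam)

# Toy usage: a squared loss on a single synthetic (x, y) datum per iteration.
rng = np.random.default_rng(0)
w = np.zeros(5)
for t in range(1, 1001):
    x, y = rng.normal(size=5), rng.normal()
    grad = 2.0 * (w @ x - y) * x  # gradient of (w.x - y)^2 at one datum
    w = stochastic_proximal_step(w, grad, eta=1.0 / t, lam=0.01)
```

The decaying O(1/t) step size mirrors the schedules typically used to obtain O(log t / t) rates under strong convexity.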

