Proximal Quasi-Newton for Computationally Intensive ℓ1-regularized M-estimators

2020-01-19

Abstract

We consider the class of optimization problems arising from computationally intensive ℓ1-regularized M-estimators, where the function or gradient values are very expensive to compute. A particular instance of interest is the ℓ1-regularized MLE for learning Conditional Random Fields (CRFs), which are a popular class of statistical models for varied structured prediction problems such as sequence labeling, alignment, and classification with label taxonomy. ℓ1-regularized MLEs for CRFs are particularly expensive to optimize since computing the gradient values requires an expensive inference step. In this work, we propose the use of a carefully constructed proximal quasi-Newton algorithm for such computationally intensive M-estimation problems, where we employ an aggressive active set selection technique. In a key contribution of the paper, we show that the proximal quasi-Newton method is provably super-linearly convergent, even in the absence of strong convexity, by leveraging a restricted variant of strong convexity. In our experiments, the proposed algorithm converges considerably faster than current state-of-the-art on the problems of sequence labeling and hierarchical classification.
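The common building block of proximal methods for ℓ1-regularized objectives is the soft-thresholding prox operator. The sketch below is only a simplified illustration of that machinery: plain proximal gradient (ISTA) on a toy ℓ1-regularized least-squares problem, not the paper's quasi-Newton method with active-set selection. All function names here (`soft_threshold`, `ista`) are illustrative, not from the paper.

```python
def soft_threshold(v, t):
    """Prox operator of t*||.||_1: shrink each coordinate toward zero by t."""
    return [max(abs(vi) - t, 0.0) * (1.0 if vi >= 0 else -1.0) for vi in v]

def gradient(X, y, w):
    """Gradient of the smooth part 0.5*||Xw - y||^2, i.e. X^T (Xw - y)."""
    resid = [sum(xij * wj for xij, wj in zip(row, w)) - yi
             for row, yi in zip(X, y)]
    return [sum(X[i][j] * resid[i] for i in range(len(X)))
            for j in range(len(w))]

def ista(X, y, lam, step, iters):
    """Proximal gradient: a gradient step on the smooth loss, then the l1 prox."""
    w = [0.0] * len(X[0])
    for _ in range(iters):
        g = gradient(X, y, w)
        w = soft_threshold([wi - step * gi for wi, gi in zip(w, g)],
                           step * lam)
    return w
```

With a step size below the reciprocal of the largest eigenvalue of X^T X, this loop converges linearly; the paper's method instead minimizes a quasi-Newton (second-order) model of the smooth loss over an aggressively selected active set, which is what yields the super-linear rate under restricted strong convexity.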

