
Fast Newton-type Methods for Total Variation Regularization

2020-02-27

Abstract

Numerous applications in statistics, signal processing, and machine learning regularize using Total Variation (TV) penalties. We study anisotropic (ℓ1-based) TV and also a related ℓ2-norm variant. For both variants we consider the associated (1D) proximity operators, which lead to challenging optimization problems. We solve these problems by developing Newton-type methods that outperform the state-of-the-art algorithms. More importantly, our 1D-TV algorithms serve as building blocks for solving the harder task of computing 2- (and higher-) dimensional TV proximity. We illustrate the computational benefits of our methods in several applications: (i) image denoising; (ii) image deconvolution (by plugging our TV solvers into publicly available software); and (iii) four variants of the fused lasso. The results show large speedups, and we provide accompanying software to support our claims.
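For context, the 1D anisotropic TV proximity operator referred to above solves argmin_x ½‖x − y‖² + λ Σᵢ |x_{i+1} − x_i|. The sketch below is our own illustration of this problem, not the paper's Newton-type solver: it runs plain projected gradient on the box-constrained dual (step 1/4 is safe because the Lipschitz constant of DDᵀ, with D the forward-difference matrix, is at most 4).

```python
import numpy as np

def prox_tv1d(y, lam, n_iter=5000, step=0.25):
    """Approximate the 1D anisotropic TV prox:
        argmin_x 0.5*||x - y||^2 + lam * sum_i |x[i+1] - x[i]|
    via projected gradient on the dual
        min_u 0.5*||y - D^T u||^2  s.t.  |u_i| <= lam,
    recovering the primal solution as x = y - D^T u.
    This is a simple baseline, not the paper's projected-Newton method.
    """
    y = np.asarray(y, dtype=float)
    u = np.zeros(len(y) - 1)  # one dual variable per adjacent difference
    for _ in range(n_iter):
        Dtu = -np.diff(u, prepend=0.0, append=0.0)  # D^T u
        grad = np.diff(Dtu - y)                      # gradient D(D^T u - y)
        u = np.clip(u - step * grad, -lam, lam)      # project onto [-lam, lam] box
    return y + np.diff(u, prepend=0.0, append=0.0)   # x = y - D^T u
```

Two easy sanity checks: the primal solution always preserves the mean of y (since Dᵀu sums to zero by telescoping), and for λ large enough the output collapses to the constant signal mean(y).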

