Total Variation Blind Deconvolution: The Devil is in the Details


Abstract

In this paper we study the problem of blind deconvolution. Our analysis is based on the algorithm of Chan and Wong [2], which popularized the use of sparse gradient priors via total variation. We use this algorithm because many methods in the literature are essentially adaptations of this framework. The algorithm is an iterative alternating energy minimization where at each step either the sharp image or the blur function is reconstructed. Recent work of Levin et al. [14] showed that any algorithm that tries to minimize that same energy would fail, as the desired solution has a higher energy than the no-blur solution, where the sharp image is the blurry input and the blur is a Dirac delta. However, experimentally one can observe that Chan and Wong’s algorithm converges to the desired solution even when initialized with the no-blur one. We provide both analysis and experiments to resolve this apparent paradox. We find that both claims are right. The key to understanding how this is possible lies in the details of Chan and Wong’s implementation and in how seemingly harmless choices result in dramatic effects. Our analysis reveals that the delayed scaling (normalization) in the iterative step of the blur kernel is fundamental to the convergence of the algorithm. This results in a procedure that eludes the no-blur solution, despite it being a global minimum of the original energy. We introduce an adaptation of this algorithm and show that, in spite of its extreme simplicity, it is very robust and achieves a performance comparable to the state of the art.

Blind deconvolution is the problem of recovering a signal and a degradation kernel from their noisy convolution. This problem is found in diverse fields such as astronomical imaging, medical imaging, (audio) signal processing, and image processing. Yet, despite over three decades of research in the field (see [11] and references therein), the design of a principled, stable and robust algorithm that can handle real images remains a challenge. However, present-day progress has shown that recent models for sharp images and blur kernels, such as total variation [18], can yield re-
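To make the mechanism concrete, here is a minimal NumPy/SciPy sketch of the alternating scheme described above: projected gradient steps on the sharp image u and the kernel k for the energy ||k * u - f||^2 + lam * TV(u), with the kernel's non-negativity and unit-sum constraints enforced only after each kernel step, i.e., the delayed scaling. The step sizes eta_u and eta_k, the kernel size, the weight lam, and the smoothed isotropic TV gradient are illustrative assumptions, not the authors' exact implementation.

```python
import numpy as np
from scipy.signal import fftconvolve

def tv_grad(u, eps=1e-3):
    """Gradient of smoothed isotropic TV, i.e. -div(grad u / |grad u|_eps)."""
    ux = np.roll(u, -1, axis=1) - u                 # forward differences
    uy = np.roll(u, -1, axis=0) - u
    mag = np.sqrt(ux ** 2 + uy ** 2 + eps ** 2)     # smoothed gradient magnitude
    px, py = ux / mag, uy / mag
    # backward-difference divergence of the normalized gradient field
    return -((px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0)))

def blind_deconv(f, ksize=15, iters=1000, eta_u=5e-3, eta_k=1e-3, lam=6e-4):
    """Alternating projected gradient descent on
       E(u, k) = ||k * u - f||^2 + lam * TV(u),  s.t.  k >= 0, sum(k) = 1.
    f is a grayscale float image; constant factors of 2 in the data-term
    gradients are absorbed into the step sizes."""
    u = f.copy()                                    # no-blur initialization: u = f ...
    k = np.zeros((ksize, ksize))
    k[ksize // 2, ksize // 2] = 1.0                 # ... and k = Dirac delta
    for _ in range(iters):
        r = fftconvolve(u, k, mode='same') - f      # residual k * u - f
        # Sharp-image step: correlate residual with k, add the TV gradient.
        gu = fftconvolve(r, k[::-1, ::-1], mode='same') + lam * tv_grad(u)
        u = u - eta_u * gu
        # Kernel step, part 1: a plain, unconstrained gradient step.
        # dE/dk(s) = sum_x r(x) u(x - s): the correlation of r with u,
        # cropped to the kernel support around zero lag.
        full = fftconvolve(r, u[::-1, ::-1], mode='full')
        cy, cx, h = full.shape[0] // 2, full.shape[1] // 2, ksize // 2
        k = k - eta_k * full[cy - h:cy + h + 1, cx - h:cx + h + 1]
        # Kernel step, part 2: the *delayed* projection/scaling. Only now are
        # non-negativity and unit sum enforced, after the descent step.
        k = np.clip(k, 0.0, None)
        k /= k.sum() + 1e-12
    return u, k
```

A call such as u_hat, k_hat = blind_deconv(f) starts exactly from the no-blur pair (f, delta). The point of the analysis is that the order of operations in the kernel step matters: scaling k only after the unconstrained gradient step is, per the abstract, what allows the iteration to move away from the no-blur solution, even though that solution is a global minimum of the original energy.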

