Abstract
Image restoration has remained an active research topic in low-level computer vision for several decades, and new approaches are constantly emerging. However, many recently proposed algorithms achieve state-of-the-art performance only at the expense of very high computation time, which clearly limits their practical relevance. In this work, we propose a simple but effective approach with both high computational efficiency and high restoration quality. We extend conventional nonlinear reaction diffusion models by several parametrized linear filters as well as several parametrized influence functions. We propose to train the parameters of the filters and the influence functions through a loss-based approach. Experiments show that our trained nonlinear reaction diffusion models largely benefit from the training of the parameters and finally lead to the best reported performance on common test datasets for image restoration. Due to their structural simplicity, our trained models are highly efficient and well-suited for parallel computation on GPUs.
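To make the idea concrete, the following is a minimal NumPy sketch of a single update step of a nonlinear diffusion model with trainable components, as described above. The specific kernels, influence function (here `np.tanh`), step size, and reaction weight are placeholder assumptions for illustration, not the learned parameters of the actual model.

```python
import numpy as np

def conv2d(img, k):
    # "same"-size, zero-padded 2-D convolution (flip kernel, then correlate)
    kh, kw = k.shape
    pad = np.pad(img, ((kh // 2,) * 2, (kw // 2,) * 2))
    kf = k[::-1, ::-1]
    out = np.zeros_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.sum(pad[i:i + kh, j:j + kw] * kf)
    return out

def diffusion_step(u, f, kernels, influence_fns, step=0.1, lam=0.5):
    """One nonlinear reaction diffusion update (illustrative sketch).

    u            -- current image estimate
    f            -- degraded observation
    kernels      -- stand-ins for the learned linear filters
    influence_fns-- stand-ins for the learned influence functions
    step, lam    -- assumed step size and reaction weight
    """
    update = np.zeros_like(u, dtype=float)
    for k, phi in zip(kernels, influence_fns):
        # filter the image, apply the nonlinearity, then filter with the
        # 180-degree-rotated kernel (adjoint of zero-padded convolution)
        response = phi(conv2d(u, k))
        update += conv2d(response, np.rot90(k, 2))
    # diffusion term plus a reaction term pulling toward the observation
    return u - step * (update + lam * (u - f))
```

In a trained model the kernels and influence functions would be optimized via the loss-based approach; here they are fixed only to show the structure of one iteration, which is dominated by convolutions and pointwise nonlinearities and therefore parallelizes well.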