Abstract
We present an approach to separating reflections from a single image. The approach uses a fully convolutional network trained end-to-end with losses that exploit both low-level and high-level image information. Our loss function includes two perceptual losses: a feature loss from a visual perception network, and an adversarial loss that encodes characteristics of transmission-layer images. We also propose a novel exclusion loss that enforces pixel-level layer separation. For quantitative evaluation and model training, we create a dataset of real-world images with reflections and corresponding ground-truth transmission layers. We validate our method through comprehensive quantitative experiments and show that it outperforms state-of-the-art reflection removal methods in PSNR, SSIM, and a perceptual user study. We also extend our method to two other image enhancement tasks to demonstrate its generality.
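To make the exclusion loss mentioned above concrete, the following is a minimal sketch of a multi-scale gradient-exclusion term in PyTorch. The idea is that an edge should appear in at most one of the two predicted layers, so the product of the layers' gradient magnitudes is driven toward zero at several scales. The function names (`gradients`, `exclusion_loss`), the number of scales, and the tanh-based gradient normalization are illustrative assumptions for this sketch, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def gradients(img):
    """Horizontal and vertical finite-difference gradients of an NCHW tensor."""
    gx = img[:, :, :, 1:] - img[:, :, :, :-1]
    gy = img[:, :, 1:, :] - img[:, :, :-1, :]
    return gx, gy

def exclusion_loss(transmission, reflection, levels=3):
    """Penalize correlated edges between the two predicted layers.

    A sketch of a gradient-exclusion term: at each scale, the elementwise
    product of the two layers' (normalized) gradient magnitudes is
    minimized, encouraging each edge to belong to only one layer.
    """
    loss = 0.0
    for _ in range(levels):
        tx, ty = gradients(transmission)
        rx, ry = gradients(reflection)
        # Balance the two layers' gradient scales before taking the product
        # (illustrative normalization; the exact scheme is an assumption).
        alpha_x = torch.mean(tx.abs()) / (torch.mean(rx.abs()) + 1e-6)
        alpha_y = torch.mean(ty.abs()) / (torch.mean(ry.abs()) + 1e-6)
        loss = loss + torch.mean(torch.tanh(tx.abs()) * torch.tanh(alpha_x * rx.abs())) \
                    + torch.mean(torch.tanh(ty.abs()) * torch.tanh(alpha_y * ry.abs()))
        # Move to a coarser scale for the next level.
        transmission = F.avg_pool2d(transmission, 2)
        reflection = F.avg_pool2d(reflection, 2)
    return loss / levels
```

In training, a term like `exclusion_loss(pred_T, pred_R)` would be combined with the feature and adversarial losses under some weighting; the weights here are left unspecified since the abstract does not give them.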