Abstract
In this paper, we address the problem of video rain removal by constructing deep recurrent convolutional networks. We investigate the rain removal case with rain occlusion regions, i.e., regions where the light transmittance of rain streaks is low. Different from additive rain streaks, in such rain occlusion regions, the details of the background images are completely lost. Therefore, we propose a hybrid rain model to depict both rain streaks and occlusions. Exploiting the
wealth of temporal redundancy, we build a Joint Recurrent
Rain Removal and Reconstruction Network (J4R-Net) that
seamlessly integrates rain degradation classification, spatial texture appearance based rain removal, and temporal coherence based background detail reconstruction. The
rain degradation classification provides a binary map that
reveals whether a location is degraded by linear additive
streaks or occlusions. With this side information, the gate
of the recurrent unit learns to make a trade-off between
rain streak removal and background detail reconstruction.
Extensive experiments on a series of synthetic and real
videos with rain streaks verify the superiority of the proposed method over previous state-of-the-art methods.
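The hybrid rain model described in the abstract can be sketched as follows. This is a minimal illustrative formulation, not the paper's exact equation: the symbol names (B for background, S for additive streaks, R for occlusion intensity, M for the binary occlusion map) are assumptions introduced here for clarity.

```python
import numpy as np

def hybrid_rain_model(B, S, R, M):
    """Illustrative hybrid rain model (names are assumptions, not the paper's).

    B: background frame
    S: additive rain streak layer
    R: rain intensity inside occlusion regions
    M: binary occlusion map (1 = occluded, 0 = additive streaks)

    Outside occlusions the streaks add linearly to the background;
    inside occlusions the background detail is replaced entirely,
    which is why it must be reconstructed from temporal redundancy.
    """
    return (1.0 - M) * (B + S) + M * R

# Example: a 2x2 frame with one occluded pixel
B = np.full((2, 2), 0.5)
S = np.full((2, 2), 0.2)
R = np.full((2, 2), 0.9)
M = np.array([[1.0, 0.0], [0.0, 0.0]])
O = hybrid_rain_model(B, S, R, M)
```

The binary map M plays the same role as the degradation classification output in the abstract: it tells the network whether a pixel needs streak removal (additive case) or full background reconstruction (occlusion case).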