Abstract.
We present a unified framework for separating specular and diffuse reflection components in images and videos of textured scenes. This can be used for specularity removal and for independently processing, filtering, and recombining the two components. Beginning with a partial separation provided by an illumination-dependent color space, the challenge is to complete the separation using spatio-temporal information. This is accomplished by evolving a partial differential equation (PDE) that iteratively erodes the specular component at each pixel. A family of PDEs appropriate for differing image sources (still images vs. videos), differing prior information (e.g., highly vs. lightly textured scenes), or differing prior computations (e.g., optical flow) is introduced. In contrast to many other methods, explicit segmentation and/or manual intervention are not required. We present results on high-quality images and video acquired in the laboratory, in addition to images taken from the Internet. Results on the latter demonstrate robustness to low dynamic range, JPEG artifacts, and lack of knowledge of illuminant color. An empirical comparison to physical removal of specularities using polarization is provided. Finally, an application termed dichromatic editing is presented, in which the diffuse and the specular components are processed independently to produce a variety of visual effects.