Abstract
Real scenes are full of specularities (highlights and reflections), and yet most vision algorithms ignore them. In order to capture the appearance of realistic scenes, we need to model specularities as separate layers. In this paper, we study the behavior of specularities in static scenes as the camera moves, and describe their dependence on varying surface geometry, orientation, and scene point and camera locations. For rectilinear camera motion with constant velocity, we study how the specular motion deviates from a straight trajectory (disparity deviation) and how much it violates the epipolar constraint (epipolar deviation). Surprisingly, for surfaces that are convex or not highly undulating, these deviations are usually quite small. We also study the appearance of specularities, i.e., how they interact with the body reflection, and with the usual occlusion ordering constraints applicable to diffuse opaque layers. We present a taxonomy of specularities based on their photometric properties as a guide for designing separation techniques. Finally, we propose a technique to extract specularities as a separate layer, and demonstrate it using an image sequence of a complex scene.