Abstract
We present an approach which exploits the coupling between human actions and scene geometry. We investigate the use of human pose as a cue for single-view 3D scene understanding. Our method builds upon recent advances in still-image pose estimation to extract functional and geometric constraints about the scene. These constraints are then used to improve state-of-the-art single-view 3D scene understanding approaches. The proposed method is validated on a collection of monocular time-lapse sequences collected from YouTube and a dataset of still images of indoor scenes. We demonstrate that observing people performing different actions can significantly improve estimates of 3D scene geometry.