Abstract
We propose an end-to-end architecture for joint 2D and 3D human pose estimation in natural images. Key to our approach is the generation and scoring of a number of pose proposals per image, which allows us to predict the 2D and 3D poses of multiple people simultaneously. Hence, our approach does not require an approximate localization of the humans for initialization. Our architecture, named LCRNet, contains 3 main components: 1) the pose proposal generator that suggests potential poses at different locations in the image; 2) a classifier that scores the different pose proposals; and 3) a regressor that refines the pose proposals both in 2D and 3D. All three stages share the convolutional feature layers and are trained jointly. The final pose estimation is obtained by integrating over neighboring pose hypotheses, which is shown to improve over a standard non-maximum suppression algorithm. Our approach significantly outperforms the state of the art in 3D pose estimation on Human3.6M, a controlled environment. Moreover, it shows promising results on real images for both the single- and multi-person subsets of the MPII 2D pose benchmark.