Abstract
We describe Human Mesh Recovery (HMR), an end-to-end framework for reconstructing a full 3D mesh of a human body from a single RGB image. In contrast to most current methods, which compute 2D or 3D joint locations, we produce a richer and more useful mesh representation that is parameterized by shape and 3D joint angles. The main objective is to minimize the reprojection loss of keypoints, which allows our model to be trained using in-the-wild images that have only ground-truth 2D annotations. However, the reprojection loss alone is highly underconstrained. In this work we address this problem by introducing an adversary trained to tell whether human body shape and pose parameters are real or not, using a large database of 3D human meshes.
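The keypoint reprojection objective described above can be sketched as follows. This is a minimal illustration, not the paper's exact implementation: the weak-perspective camera parameters (`scale`, `translation`), the function name, and the per-keypoint visibility mask are assumptions about the setup.

```python
import numpy as np

def reprojection_loss(joints_3d, keypoints_2d, visibility, scale, translation):
    """L1 reprojection loss between projected 3D joints and 2D annotations.

    joints_3d:    (K, 3) predicted 3D joint positions
    keypoints_2d: (K, 2) ground-truth 2D keypoint annotations
    visibility:   (K,)   1 if the keypoint is annotated/visible, else 0
    scale:        scalar weak-perspective camera scale
    translation:  (2,)   camera translation in the image plane
    """
    # Weak-perspective projection: drop the depth coordinate,
    # then scale and translate into image coordinates.
    projected = scale * joints_3d[:, :2] + translation
    # Per-keypoint L1 residual, counted only where an annotation exists,
    # so in-the-wild images with partial 2D labels can still supervise.
    residual = np.abs(projected - keypoints_2d).sum(axis=1)
    return (visibility * residual).sum() / max(visibility.sum(), 1)
```

Because the loss is computed purely in 2D, many distinct 3D poses project to the same keypoints, which is the underconstrained ambiguity the adversarial prior on shape and pose parameters is meant to resolve.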