Abstract
We present a unified deformation model for the markerless capture of human movement at multiple scales, including facial expressions, body motion, and hand gestures.
An initial model, which we refer to as "Frank", is generated by locally stitching together
models of the individual parts of the human body.
This model enables the full expression of part movements,
including the face and hands, by a single
seamless model. We capture a dataset of people wearing
everyday clothes and optimize the Frank model to create
“Adam”: a calibrated model that shares the same skeleton
hierarchy as the initial model with a simpler parameterization. Finally, we demonstrate the use of these models for
total motion tracking in a multiview setup, simultaneously
capturing the large-scale body movements and the subtle
face and hand motion of a social group of people.