Abstract
We address the problem of transferring motion between captured 4D models. We focus in particular on human subjects, for which the ability to automatically augment 4D datasets by propagating movements between subjects is of interest to many recent vision applications that build on human visual corpora. Given 4D training sets for two subjects, for which a sparse set of corresponding keyposes is known, our method can transfer a newly captured motion from one subject to the other. To generalize transfers to input motions that may differ substantially from the training sets, the method contributes a new transfer model based on non-linear pose interpolation. Building on Gaussian process regression, this model aims to capture and preserve individual motion properties, and thereby realism, by accounting for pose inter-dependencies during motion transfers. Our experiments show both qualitative and quantitative improvements over existing pose-mapping methods and confirm the improved generalization of our method over the state of the art.
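For illustration only, the kind of Gaussian-process-based pose mapping the abstract refers to could be sketched as below. This is a minimal sketch using scikit-learn, not the authors' implementation: the function names (fit_pose_transfer, transfer_motion), the pose representation (flattened joint-coordinate vectors), and the kernel choice are all assumptions, and the simple per-frame regression shown here does not capture the pose inter-dependencies the paper's model accounts for.

```python
# Minimal sketch of GP-regression pose mapping (NOT the paper's code).
# Assumption: poses are flattened joint-coordinate vectors, and a sparse
# set of corresponding keyposes (source_keyposes[i] <-> target_keyposes[i])
# between the two subjects is given.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel


def fit_pose_transfer(source_keyposes: np.ndarray,
                      target_keyposes: np.ndarray) -> GaussianProcessRegressor:
    """Fit a non-linear map from source poses to target poses.

    source_keyposes: (n_keyposes, d_source) source pose vectors.
    target_keyposes: (n_keyposes, d_target) corresponding target poses.
    """
    # RBF gives a smooth non-linear interpolant; WhiteKernel absorbs
    # capture noise in the keypose correspondences.
    kernel = RBF(length_scale=1.0) + WhiteKernel(noise_level=1e-5)
    gpr = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
    gpr.fit(source_keyposes, target_keyposes)  # multi-output regression
    return gpr


def transfer_motion(gpr: GaussianProcessRegressor,
                    source_motion: np.ndarray) -> np.ndarray:
    """Map every frame of a newly captured source motion to the target."""
    return gpr.predict(source_motion)  # (n_frames, d_target)
```

In this sketch the transferred motion is obtained frame by frame, so any temporal coherence comes only from the smoothness of the GP interpolant; the paper's contribution lies precisely in going beyond such independent per-pose mapping.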