Abstract
Volumetric structures are frequently used as shape descriptors for 3D data. The capture of such data is being facilitated by developments in multi-view video and range scanning, extending to subjects that are alive and moving. In this paper, we examine vision-based modeling and the related representation of moving articulated creatures using spines. We define a spine as a branching axial structure representing the shape and topology of a 3D object's limbs, and capturing the limbs' correspondence and motion over time. Our spine concept builds on skeletal representations often used to describe the internal structure of an articulated object and its significant protrusions. The algorithms for determining both 2D and 3D skeletons generally use an objective function tuned to balance stability against responsiveness to detail. Our representation of a spine provides enhancements over a 3D skeleton, afforded by temporal robustness and correspondence. We also introduce a probabilistic framework that is needed to compute the spine from a sequence of surface data. We present a practical implementation that approximates the spine's joint probability function to reconstruct spines for synthetic and real subjects that move.