Abstract
During the past few years we have witnessed the development of many methodologies for building and fitting Statistical Deformable Models (SDMs). The construction of accurate SDMs requires careful annotation of images with regard to a consistent set of landmarks. However, the manual annotation of a large number of images is a tedious, laborious and expensive procedure. Furthermore, for several deformable objects, e.g. the human body, it is difficult to define a consistent set of landmarks, and thus it becomes impossible to train humans to accurately annotate a collection of images. Nevertheless, for the majority of objects, it is possible to extract the shape by object segmentation or even by shape drawing. In this paper, we show for the first time, to the best of our knowledge, that it is possible to construct SDMs by putting object shapes in dense correspondence. Such SDMs can be built with much less effort for a large battery of objects. Additionally, we show that, by sampling the dense model, a part-based SDM can be learned with its parts being in correspondence. We employ our framework to develop SDMs of human arms and legs, which can be used for the segmentation of the outline of the human body, as well as to provide better and more consistent annotations for body joints.