Abstract
This paper proposes a deep cascade network to generate the 3D geometry of an object as a point cloud, i.e., a set of permutation-insensitive points. Such a surface representation is easy to learn from, but its lack of geometric connectivity inhibits exploiting the rich low-dimensional topological manifolds of the object shape. To benefit from this simple structure while still utilizing rich neighborhood information across points, this paper proposes a two-stage cascade model on point sets. Specifically, our method first adopts a state-of-the-art point set autoencoder to generate a sparse, coarse shape, and then locally refines it by encoding neighborhood connectivity on a graph representation. An ensemble of sparse refined surfaces is designed to alleviate the local minima caused by modeling complex geometric manifolds. Moreover, our model develops a dynamically weighted loss function that jointly penalizes the generation outputs of the cascade levels at different training stages in a coarse-to-fine manner. Comparative evaluation on the public ShapeNet benchmark demonstrates the superior performance of the proposed model over state-of-the-art methods on both single-view shape reconstruction and shape autoencoding.