Abstract
Learning and analyzing 3D point clouds with deep networks is challenging due to the sparseness and irregularity of the data. In this paper, we present a data-driven point cloud upsampling technique. The key idea is to learn multi-level features per point and to expand the point set via a multi-branch convolution unit implicitly in feature space. The expanded feature is then split into a multitude of features, which are reconstructed into an upsampled point set. Our network is applied at the patch level, with a joint loss function that encourages the upsampled points to remain on the underlying surface with a uniform distribution. We conduct various experiments using synthetic and scanned data to evaluate our method and demonstrate its superiority over baseline methods and an optimization-based method. Results show that our upsampled points have better uniformity and lie closer to the underlying surfaces.
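To make the feature-expansion idea concrete, below is a minimal PyTorch sketch of a multi-branch expansion followed by per-point coordinate reconstruction, as outlined above. The module name, channel sizes, and layer choices are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class FeatureExpansion(nn.Module):
    """Hypothetical sketch: expand per-point features r times via
    independent 1x1 convolution branches in feature space, then
    regress 3D coordinates for the expanded points."""
    def __init__(self, in_channels=256, up_ratio=4):
        super().__init__()
        # one branch per upsampling factor, all applied to the same features
        self.branches = nn.ModuleList([
            nn.Conv1d(in_channels, in_channels, kernel_size=1)
            for _ in range(up_ratio)
        ])
        # shared per-point coordinate-regression head
        self.coord = nn.Sequential(
            nn.Conv1d(in_channels, 64, kernel_size=1), nn.ReLU(),
            nn.Conv1d(64, 3, kernel_size=1)
        )

    def forward(self, feats):                          # feats: (B, C, N)
        expanded = [b(feats) for b in self.branches]   # r tensors of (B, C, N)
        expanded = torch.cat(expanded, dim=2)          # (B, C, r*N)
        return self.coord(expanded)                    # (B, 3, r*N)

# usage: expand features of 1024 points into 4096 upsampled points
feats = torch.randn(2, 256, 1024)
print(FeatureExpansion()(feats).shape)                 # torch.Size([2, 3, 4096])
```

Because each branch has its own weights, the r copies of a point's feature are pushed apart in feature space before reconstruction, which is one plausible way to realize the implicit point-set expansion described in the abstract.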