Abstract
3D shape models are naturally parameterized using vertices and faces, i.e., composed of polygons forming a surface. However, current 3D learning paradigms for predictive and generative tasks using convolutional neural networks focus on a voxelized representation of the object. Lifting convolution operators from the traditional 2D image domain to 3D incurs high computational overhead with little additional benefit, as most of the geometric information is contained on the surface boundary. Here we study the problem of directly generating the 3D shape surface of rigid and non-rigid shapes using deep convolutional neural networks. We develop a procedure to create consistent ‘geometry images’ representing the shape surface of a category of 3D objects. We then use this consistent representation for category-specific shape surface generation from a parametric representation or an image, by developing novel extensions of deep residual networks for the task of geometry image generation. Our experiments indicate that our network learns a meaningful representation of shape surfaces, allowing it to interpolate between shape orientations and poses, invent new shape surfaces, and reconstruct 3D shape surfaces from previously unseen images. Our code is available at https://github.com/sinhayan/surfnet.