Abstract
We present a new approach to 3D object representation in which the geometry of an object is encoded directly into the weights and biases of a second, ‘mapping’ network. This mapping network can be used to reconstruct an object by applying its encoded transformation to points randomly sampled from a simple geometric space, such as the unit sphere. Our experiments evaluate the effectiveness of our method on a subset of the ShapeNet dataset. We find that this representation can reconstruct objects with accuracy equal to or exceeding that of state-of-the-art methods while using orders of magnitude fewer parameters. Our smallest reconstruction network has only about 7,000 parameters yet achieves reconstruction quality on par with state-of-the-art object representation architectures that use millions of parameters. Further experiments show that the learned space of functions meaningfully captures the features of the encoded objects, enabling feature mixing through the composition of these functions.
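To make the core idea concrete, below is a minimal sketch of the kind of mapping network the abstract describes: a small MLP whose weights encode a single object, applied to points sampled uniformly from the unit sphere to produce a reconstructed point cloud. The architecture, layer sizes, and names here are illustrative assumptions, not the paper's exact design.

```python
import torch
import torch.nn as nn

class MappingNetwork(nn.Module):
    """Hypothetical small MLP whose weights and biases encode one object's
    geometry. Layer widths are illustrative, not the paper's architecture."""
    def __init__(self, hidden: int = 32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3),  # maps a sphere point to a surface point
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

def sample_unit_sphere(n: int) -> torch.Tensor:
    # Uniform samples on the unit sphere: normalize Gaussian draws.
    v = torch.randn(n, 3)
    return v / v.norm(dim=1, keepdim=True)

# Reconstruction: push randomly sampled sphere points through the
# encoded transformation to obtain a point cloud of the object.
net = MappingNetwork()
points = net(sample_unit_sphere(2048))  # (2048, 3) reconstructed points
```

A network of this rough size has on the order of a few thousand parameters, which is consistent in spirit with the abstract's claim of a roughly 7,000-parameter reconstruction network.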