Abstract
Convolutional neural networks (CNNs) have massively
impacted visual recognition in 2D images, and are now
ubiquitous in state-of-the-art approaches. CNNs do not
easily extend, however, to data that are not represented by
regular grids, such as 3D shape meshes or other graph-structured data, to which traditional local convolution operators do not directly apply. To address this problem, we
propose a novel graph-convolution operator to establish
correspondences between filter weights and graph neighborhoods with arbitrary connectivity. The key novelty of
our approach is that these correspondences are dynamically computed from features learned by the network, rather
than relying on predefined static coordinates over the graph
as in previous work. We obtain excellent experimental results that significantly improve over the previous state of the art in shape correspondence. This shows that our approach can learn effective shape representations from raw
input coordinates, without relying on shape descriptors.
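To make the idea concrete, the following is a minimal numpy sketch of a graph-convolution layer in this spirit: each neighbor is softly assigned to one of M filter weight matrices, with the assignment computed from the learned features of the center node and the neighbor rather than from fixed coordinates. The specific parameterization (assignment vectors `U`, `V`, bias `c`) is an illustrative assumption, not the exact operator defined in the paper.

```python
import numpy as np

def feature_steered_conv(x, neighbors, U, V, c, W, b):
    """Sketch of one feature-steered graph convolution layer (hypothetical form).

    x         : (N, D) input node features
    neighbors : list of length N; neighbors[i] is an index array of node i's neighbors
    U, V      : (M, D) assignment parameters (M = number of filter weight matrices)
    c         : (M,)  assignment bias
    W         : (M, D, E) filter weight matrices
    b         : (E,)  output bias
    """
    N, D = x.shape
    M, _, E = W.shape
    y = np.zeros((N, E))
    for i in range(N):
        nbrs = neighbors[i]
        # Soft correspondence of each neighbor j to the M filter weights,
        # computed dynamically from features of node i and neighbor j.
        logits = U @ x[i] + x[nbrs] @ V.T + c          # (|N_i|, M)
        q = np.exp(logits - logits.max(axis=1, keepdims=True))
        q /= q.sum(axis=1, keepdims=True)              # softmax over the M weights
        # Accumulate the assignment-weighted filtered neighbor features.
        for m in range(M):
            y[i] += (q[:, m:m + 1] * (x[nbrs] @ W[m])).sum(axis=0)
        y[i] = y[i] / len(nbrs) + b
    return y
```

Because the assignments are a softmax over learned linear functions of the features, the operator is differentiable end-to-end and applies to neighborhoods of arbitrary size and connectivity, unlike a fixed-grid convolution stencil.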