Abstract
Interpreting 3D data such as point clouds or surface
meshes depends heavily on the scale of observation. Yet,
existing algorithms for shape detection rely on trial-and-error parameter tuning to output configurations representative of a structural scale. We present a framework to automatically extract a set of representations that capture the
shape and structure of man-made objects at different key
abstraction levels. A shape-collapsing process first generates a fine-to-coarse sequence of shape representations by
exploiting local planarity. This sequence is then analyzed to
identify significant geometric variations between successive
representations through a supervised energy minimization.
Our framework is flexible enough to learn how to detect
both existing structural formalisms, such as the CityGML
Levels of Detail, and expert-specified levels of abstraction.
Experiments on different input data and classes of manmade objects, as well as comparisons with existing shape
detection methods, illustrate the strengths of our approach
in terms of efficiency and flexibility.
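
As a rough illustration of the selection step described above (not the authors' method, which relies on a supervised energy minimization), the Python sketch below keeps the representations of a fine-to-coarse sequence at which a geometric error measure jumps sharply between successive levels; the function name, threshold, and error values are hypothetical placeholders.

    # Toy sketch: pick "key" abstraction levels from a fine-to-coarse sequence
    # by detecting large jumps in a per-level approximation error.
    # This is only illustrative; the paper formulates the selection as a
    # supervised energy minimization rather than a simple threshold.

    def select_key_levels(errors, jump_threshold):
        """Return indices of levels whose error increases sharply
        relative to the previous level, plus the finest level (index 0)."""
        keys = [0]
        for i in range(1, len(errors)):
            if errors[i] - errors[i - 1] > jump_threshold:
                keys.append(i)
        return keys

    # Illustrative approximation errors of a fine-to-coarse sequence.
    errors = [0.01, 0.012, 0.013, 0.08, 0.085, 0.30, 0.31]
    print(select_key_levels(errors, jump_threshold=0.05))  # -> [0, 3, 5]

In this toy setting, levels 3 and 5 mark significant geometric variations and would be retained as representative abstraction levels alongside the finest one.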