Abstract
We present the first variational framework for multi-label segmentation on the ray space of 4D light fields. For traditional segmentation of single images, features need to be extracted from the 2D projection of a three-dimensional scene. The associated loss of geometry information can cause severe problems, for example if different objects have a very similar visual appearance. In this work, we show that using a light field instead of a single image not only makes it possible to train classifiers which overcome many of these problems, but also provides an optimal data structure for label optimization by implicitly encoding scene geometry. It is thus possible to consistently optimize label assignment over all views simultaneously. As a further contribution, we make all light fields available online, with complete depth and segmentation ground truth where available, and thus establish the first benchmark data set for light field analysis, facilitating competitive further development of algorithms.