Abstract
In this paper, we address the shape-from-shading problem by training deep networks with synthetic images. Unlike conventional approaches that combine deep learning
and synthetic imagery, we propose an approach that does
not require any external shape dataset to render synthetic images. Our approach consists of two synergistic processes:
the evolution of complex shapes from simple primitives, and
the training of a deep network for shape-from-shading. The
evolution generates better shapes guided by the network
training, while the training improves by using the evolved
shapes. We show that our approach achieves state-of-the-art performance on a shape-from-shading benchmark.