Abstract
Recent progress in deep discriminative and generative modeling has shown promising results for texture synthesis. However, existing feed-forward methods trade off generality for efficiency and suffer from several issues: limited generality (i.e., one network must be built per texture), lack of diversity (i.e., they always produce visually identical outputs), and suboptimality (i.e., they generate less satisfying visual effects). In this work, we focus on solving these issues for improved texture synthesis. We propose a deep generative feed-forward network that enables efficient synthesis of multiple textures within a single network and meaningful interpolation between them. Meanwhile, a suite of important techniques is introduced to achieve better convergence and diversity. With extensive experiments, we demonstrate the effectiveness of the proposed model and techniques for synthesizing a large number of textures and show its application to image stylization.