Abstract
This paper studies a combination of generative Markov random field (MRF) models and discriminatively trained deep convolutional neural networks (dCNNs) for synthesizing 2D images. The generative MRF acts on higher levels of a dCNN feature pyramid, controlling the image layout at an abstract level. We apply the method to both photographic and non-photo-realistic (artwork) synthesis tasks. The MRF regularizer prevents over-excitation artifacts and reduces implausible feature mixtures common to previous dCNN inversion approaches, permitting synthesis of photographic content with increased visual plausibility. Unlike standard MRF-based texture synthesis, the combined system can both match and adapt local features with considerable variability, yielding results far out of reach of classic generative MRF methods.