Photographic Text-to-Image Synthesis with a Hierarchically-nested Adversarial Network

2019-10-17
Abstract: This paper presents a novel method for the challenging task of generating photographic images conditioned on semantic image descriptions. Our method introduces accompanying hierarchically-nested adversarial objectives inside the network hierarchies, which regularize mid-level representations and assist generator training to capture complex image statistics. We present an extensible single-stream generator architecture that better accommodates the joint discriminators and pushes generated images up to high resolutions. We adopt a multi-purpose adversarial loss to encourage more effective use of image and text information, improving semantic consistency and image fidelity simultaneously. Furthermore, we introduce a new visual-semantic similarity measure to evaluate the semantic consistency of generated images. With extensive experimental validation on three public datasets, our method significantly improves on the previous state of the art across all datasets and evaluation metrics.
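The abstract gives no implementation details, but the core idea of hierarchically-nested adversarial objectives can be illustrated structurally: the single-stream generator emits images at several intermediate resolutions, and each resolution is scored by its own discriminator, with the generator loss summed over all scales. The sketch below is a hypothetical illustration under that reading; the resolutions, the placeholder discriminator, and all function names are assumptions, not the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def discriminator_score(image):
    # Placeholder "realness" score in (-1, 1); a real model would be
    # a learned CNN discriminator attached at this resolution.
    return float(np.tanh(image.mean()))

def hierarchical_adversarial_loss(multi_scale_images):
    # Generator-side adversarial loss aggregated over every scale:
    # each intermediate output is pushed toward "real" by the
    # discriminator nested at its own level of the hierarchy.
    return sum(
        -np.log(0.5 * (discriminator_score(img) + 1.0) + 1e-8)
        for img in multi_scale_images
    )

# Stand-ins for the generator's multi-resolution outputs
# (e.g. 64x64, 128x128, 256x256 RGB images).
outputs = [rng.standard_normal((r, r, 3)) for r in (64, 128, 256)]
loss = hierarchical_adversarial_loss(outputs)
```

Because every intermediate representation receives its own adversarial signal, mid-level features are regularized directly rather than only through the final high-resolution output.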

