
Pose Guided Person Image Generation

2020-02-12

Abstract

This paper proposes the novel Pose Guided Person Generation Network (PG²) that allows synthesizing person images in arbitrary poses, based on an image of that person and a novel pose. Our generation framework PG² utilizes the pose information explicitly and consists of two key stages: pose integration and image refinement. In the first stage, the condition image and the target pose are fed into a U-Net-like network to generate an initial but coarse image of the person in the target pose. The second stage then refines the initial, blurry result by training a U-Net-like generator in an adversarial way. Extensive experimental results on both 128×64 re-identification images and 256×256 fashion photos show that our model generates high-quality person images with convincing details.
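The two-stage data flow described above can be sketched as follows. This is a minimal illustrative sketch, not the paper's actual networks: `coarse_generator` and `refiner` are hypothetical stand-ins for the two U-Net-like generators, and the pose is represented as keypoint heatmaps (the channel count of 18 is an assumption for illustration).

```python
import numpy as np

def coarse_generator(condition_image, target_pose):
    # Stage 1 (pose integration): condition image + target pose
    # -> coarse image of the person in the target pose.
    # Here a trivial blend stands in for the U-Net-like generator.
    pose_channel = target_pose.mean(axis=-1, keepdims=True)  # (H, W, 1)
    return 0.5 * condition_image + 0.5 * pose_channel        # broadcasts to (H, W, 3)

def refiner(condition_image, coarse_image):
    # Stage 2 (image refinement): a second generator, trained
    # adversarially in the paper, sharpens the blurry stage-1 output.
    # A tiny residual correction stands in for that refinement here.
    residual = 0.1 * (condition_image - coarse_image)
    return coarse_image + residual

H, W = 256, 256
condition_image = np.random.rand(H, W, 3)   # reference photo of the person
target_pose = np.random.rand(H, W, 18)      # assumed: 18 keypoint heatmaps

coarse = coarse_generator(condition_image, target_pose)
refined = refiner(condition_image, coarse)
print(refined.shape)  # (256, 256, 3)
```

The point of the sketch is the interface, not the arithmetic: stage 1 consumes the condition image together with the target pose, while stage 2 consumes the condition image together with stage 1's coarse output.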

