Transformation-Grounded Image Generation Network for Novel 3D View Synthesis

2019-12-10

Abstract

We present a transformation-grounded image generation network for novel 3D view synthesis from a single image. Our approach first explicitly infers the parts of the geometry visible in both the input and novel views, and then casts the remaining synthesis problem as image completion. Specifically, we predict a flow that moves pixels from the input to the novel view, along with a novel visibility map that helps deal with occlusion/disocclusion. Next, conditioned on those intermediate results, we hallucinate (infer) the parts of the object invisible in the input image. In addition to the new network structure, training with a combination of adversarial and perceptual losses reduces common artifacts of novel view synthesis such as distortions and holes, while successfully generating high-frequency details and preserving visual aspects of the input image. We evaluate our approach on a wide range of synthetic and real examples. Both qualitative and quantitative results show that our method achieves significantly better results than existing methods.
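The page carries no code, so the following is a minimal PyTorch sketch of the pipeline the abstract describes: warp the input view with a predicted flow, mask it with the visibility map, and hand the masked result to a completion network that hallucinates the disoccluded regions. The names `warp_with_flow` and `completion_net`, and the flow/visibility tensor conventions, are hypothetical illustrations, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def warp_with_flow(src, flow):
    """Warp the input view with a predicted appearance flow.

    src:  (B, 3, H, W) input image.
    flow: (B, 2, H, W) per-pixel (x, y) offsets in normalized
          [-1, 1] coordinates (an assumed convention).
    """
    B, _, H, W = src.shape
    ys, xs = torch.meshgrid(
        torch.linspace(-1.0, 1.0, H, device=src.device),
        torch.linspace(-1.0, 1.0, W, device=src.device),
        indexing="ij",
    )
    base = torch.stack((xs, ys), dim=-1).unsqueeze(0).expand(B, -1, -1, -1)
    grid = base + flow.permute(0, 2, 3, 1)  # displace the sampling grid
    return F.grid_sample(src, grid, align_corners=True)

# visibility: (B, 1, H, W), ~1 where a novel-view pixel is visible in the
# input view and ~0 where it is disoccluded and must be hallucinated.
warped = warp_with_flow(src, flow)
grounded = warped * visibility  # keep only transformation-grounded pixels
# completion_net is a placeholder encoder-decoder that fills the
# disoccluded regions, conditioned on the warped pixels and the mask.
novel_view = completion_net(torch.cat((grounded, visibility), dim=1))
```

Per the abstract, training would then combine a perceptual (feature-space) loss between `novel_view` and the ground-truth target view with an adversarial loss from a discriminator.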

