Image-based Synthesis and Re-Synthesis of Viewpoints Guided by 3D Models

Abstract

We propose a technique that uses structural information extracted from a set of 3D models of an object class to improve novel-view synthesis for images showing unknown instances of that class. These novel views can be used to "amplify" training image collections, which typically contain only a small number of views or lack certain classes of views entirely (e.g. top views). We extract the correlation of position, normal, reflectance and appearance from computer-generated images of a few exemplars and use this information to infer new appearance for new instances. We show that our approach can improve the performance of state-of-the-art detectors trained on real-world data. Additional applications include guided versions of inpainting, 2D-to-3D conversion, super-resolution and non-local smoothing.
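To make the core idea concrete, the following is a minimal sketch of geometry-guided appearance transfer in the spirit the abstract describes: per-pixel guide features (position, normal, reflectance) from rendered exemplars are paired with their appearance, and appearance for a novel view of an unknown instance is inferred by nearest-neighbor lookup in guide-feature space. The feature layout, weighting scheme and function names are illustrative assumptions, not the paper's actual formulation.

```python
# Hedged sketch: infer appearance for a novel view from exemplar renderings,
# guided by per-pixel geometry/reflectance features. All names and weights
# below are hypothetical; the paper's method may differ substantially.
import numpy as np
from scipy.spatial import cKDTree

def build_exemplar_index(guides, appearance):
    """guides: (N, G) per-pixel features from rendered exemplars
    (e.g. position, normal, reflectance); appearance: (N, 3) RGB values."""
    return cKDTree(guides), appearance

def infer_appearance(tree, exemplar_rgb, novel_guides, k=8):
    """For each pixel of the novel view, average the appearance of the
    k nearest exemplar pixels in guide-feature space (inverse-distance weights)."""
    dists, idx = tree.query(novel_guides, k=k)
    weights = 1.0 / (dists + 1e-6)
    weights /= weights.sum(axis=1, keepdims=True)
    return (weights[..., None] * exemplar_rgb[idx]).sum(axis=1)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    ex_guides = rng.random((10000, 9))   # position(3) + normal(3) + reflectance(3)
    ex_rgb = rng.random((10000, 3))      # appearance of exemplar pixels
    tree, rgb = build_exemplar_index(ex_guides, ex_rgb)
    novel = rng.random((500, 9))         # guide buffers of the novel view
    print(infer_appearance(tree, rgb, novel).shape)  # (500, 3)
```

The same lookup structure also suggests how the guided variants of inpainting, super-resolution and non-local smoothing mentioned above could be phrased: the guide features select which exemplar appearance is propagated to each target pixel.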
