RGBD-GAN: UNSUPERVISED 3D REPRESENTATION LEARNING FROM NATURAL IMAGE DATASETS VIA RGBD IMAGE SYNTHESIS

2020-01-02

Abstract
Inferring three-dimensional (3D) geometry from two-dimensional (2D) images without any labeled information is promising for understanding the real world without incurring annotation costs. We herein propose a novel generative model, RGBD-GAN, which achieves unsupervised 3D representation learning from 2D images. The proposed method enables camera parameter–conditional image generation and depth image generation without any 3D annotations such as camera poses or depth. We use an explicit 3D consistency loss for two RGBD images generated from different camera parameters in addition to the ordinary GAN objective. The loss is simple yet effective for conditioning any type of image generator, such as DCGAN or StyleGAN, on camera parameters. Through experiments, we demonstrated that the proposed method could learn 3D representations from 2D images with various generator architectures.
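
The abstract describes an explicit 3D consistency loss between two RGBD images generated from different camera parameters. Below is a minimal NumPy sketch of one plausible form of such a loss: pixels of one generated RGBD image are unprojected with their depth, reprojected into the second camera, and compared photometrically. The function names (backproject, warp_rgbd, consistency_loss), the intrinsics K, and the relative pose T_ab are illustrative assumptions, not the paper's actual implementation, which may differ (e.g., in warping direction, additional depth consistency terms, or occlusion handling).

```python
import numpy as np

def backproject(depth, K):
    """Lift a depth map (H, W) into 3D points (H*W, 3) in camera coordinates."""
    H, W = depth.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))            # pixel grids, shape (H, W)
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3).astype(float)
    rays = pix @ np.linalg.inv(K).T                            # normalized camera rays
    return rays * depth.reshape(-1, 1)                         # scale rays by depth

def warp_rgbd(rgb_a, depth_a, K, T_ab):
    """Forward-warp RGBD image A into camera B's view via a relative pose T_ab (4x4)."""
    H, W, _ = rgb_a.shape
    pts_a = backproject(depth_a, K)                            # points in camera A frame
    pts_h = np.concatenate([pts_a, np.ones((pts_a.shape[0], 1))], axis=1)
    pts_b = (pts_h @ T_ab.T)[:, :3]                            # points in camera B frame
    proj = pts_b @ K.T
    uv = proj[:, :2] / np.clip(proj[:, 2:3], 1e-6, None)       # perspective divide
    u = np.round(uv[:, 0]).astype(int).reshape(H, W)
    v = np.round(uv[:, 1]).astype(int).reshape(H, W)
    in_view = ((u >= 0) & (u < W) & (v >= 0) & (v < H)
               & (depth_a > 0) & (pts_b[:, 2].reshape(H, W) > 0))
    warped = np.zeros_like(rgb_a)
    valid = np.zeros((H, W), dtype=bool)
    warped[v[in_view], u[in_view]] = rgb_a[in_view]            # nearest-neighbour splat
    valid[v[in_view], u[in_view]] = True
    return warped, valid

def consistency_loss(rgb_b, warped_a, valid):
    """L1 photometric difference on the pixels covered by the warp."""
    return np.abs(rgb_b[valid] - warped_a[valid]).mean() if valid.any() else 0.0

# Toy usage with random stand-ins for two generator outputs G(z, pose_a), G(z, pose_b)
H, W = 64, 64
K = np.array([[60.0, 0.0, W / 2], [0.0, 60.0, H / 2], [0.0, 0.0, 1.0]])
rgb_a, rgb_b = np.random.rand(H, W, 3), np.random.rand(H, W, 3)
depth_a = np.full((H, W), 2.0)
T_ab = np.eye(4)
T_ab[0, 3] = 0.1                                               # small sideways camera shift
warped_a, valid = warp_rgbd(rgb_a, depth_a, K, T_ab)
print("3D consistency loss:", consistency_loss(rgb_b, warped_a, valid))
```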
