Human Shape from Silhouettes using Generative HKS Descriptors and Cross-Modal Neural Networks
Abstract

In this work, we present a novel method for capturing human body shape from a single scaled silhouette. We combine deep correlated features capturing different 2D views with embedding spaces based on 3D cues in a novel convolutional neural network (CNN) based architecture. We first train a CNN to find a richer body-shape representation space from pose-invariant 3D human shape descriptors. Then we learn a mapping from silhouettes to this representation space, with the help of a novel architecture that exploits the correlation of multi-view data during training to improve prediction at test time. We extensively validate our results on synthetic and real data, demonstrating significant improvements in accuracy compared to the state-of-the-art, and providing a practical system for detailed human body measurements from a single image.
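To make the two-stage idea above concrete: the 3D shape descriptors referenced in the title are Heat Kernel Signatures (HKS), which at a surface point x and diffusion scale t sum exp(-lambda_i * t) * phi_i(x)^2 over the Laplace-Beltrami eigenpairs (lambda_i, phi_i), making them invariant to pose (isometric deformation). Below is a minimal PyTorch-style sketch of how a silhouette-to-embedding stage with a multi-view correlation term might look. All class names, layer sizes, and the specific correlation loss are assumptions for illustration, not the paper's actual architecture; the paper only states that multi-view correlation is exploited at training time so that a single view suffices at test time.

```python
import torch
import torch.nn as nn

class SilhouetteEncoder(nn.Module):
    """Maps a binary silhouette image to the shared shape-embedding space
    (the space learned in stage one from HKS descriptors)."""
    def __init__(self, embed_dim=128):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(32, 64, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),          # global pooling to a 128-D vector
        )
        self.fc = nn.Linear(128, embed_dim)

    def forward(self, x):
        return self.fc(self.features(x).flatten(1))

def training_step(front_enc, side_enc, front_sil, side_sil, hks_embedding, lam=0.1):
    """One multi-view training step: each view-specific encoder regresses the
    HKS-derived target embedding, and an extra term encourages the two views'
    predictions to agree (a simple stand-in for a CCA-style correlation loss).
    At test time only one encoder and one silhouette are needed."""
    z_f = front_enc(front_sil)
    z_s = side_enc(side_sil)
    fit = ((z_f - hks_embedding) ** 2).mean() + ((z_s - hks_embedding) ** 2).mean()
    corr = ((z_f - z_s) ** 2).mean()          # cross-view agreement penalty
    return fit + lam * corr

# Hypothetical usage with a batch of 240x320 silhouettes and 128-D targets.
front_enc, side_enc = SilhouetteEncoder(), SilhouetteEncoder()
loss = training_step(front_enc, side_enc,
                     torch.rand(8, 1, 240, 320), torch.rand(8, 1, 240, 320),
                     torch.rand(8, 128))
loss.backward()
```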
