PANDA: Pose Aligned Networks for Deep Attribute Modeling

2019-12-11

Abstract

We propose a method for inferring human attributes (such as gender, hair style, clothes style, expression, action) from images of people under large variation of viewpoint, pose, appearance, articulation and occlusion. Convolutional Neural Nets (CNN) have been shown to perform very well on large scale object recognition problems [15]. In the context of attribute classification, however, the signal is often subtle and it may cover only a small part of the image, while the image is dominated by the effects of pose and viewpoint. Discounting for pose variation would require training on very large labeled datasets which are not presently available. Part-based models, such as poselets [4] and DPM [12] have been shown to perform well for this problem but they are limited by shallow low-level features. We propose a new method which combines part-based models and deep learning by training pose-normalized CNNs. We show substantial improvement vs. state-of-the-art methods on challenging attribute classification tasks in unconstrained settings. Experiments confirm that our method outperforms both the best part-based methods on this problem and conventional CNNs trained on the full bounding box of the person.
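The abstract describes the core idea of the method: crop pose-normalized part patches with a poselet-style detector, run each part through its own CNN, and combine the part features to score attributes. Below is a minimal PyTorch sketch of that pipeline. The number of parts, patch size, layer sizes, and the `PartCNN` / `PoseAlignedAttributeNet` names are illustrative assumptions rather than the paper's exact architecture, and the pose-normalized crops are assumed to come from an external poselet detector.

```python
# Minimal sketch of a pose-aligned attribute network, assuming pose-normalized
# part crops are already produced by a poselet-style detector. Sizes and names
# are illustrative, not the authors' exact configuration.
import torch
import torch.nn as nn


class PartCNN(nn.Module):
    """Small CNN applied to one pose-normalized part patch (here 64x64 RGB)."""

    def __init__(self, feat_dim: int = 128):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, 32, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(32, 64, 5, stride=2, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4),
        )
        self.fc = nn.Linear(64 * 4 * 4, feat_dim)

    def forward(self, x):
        return self.fc(self.conv(x).flatten(1))


class PoseAlignedAttributeNet(nn.Module):
    """Concatenates per-part CNN features and scores each attribute linearly."""

    def __init__(self, num_parts: int = 4, num_attributes: int = 9, feat_dim: int = 128):
        super().__init__()
        # One CNN per part plus one for the whole-person bounding box.
        self.part_nets = nn.ModuleList(PartCNN(feat_dim) for _ in range(num_parts + 1))
        self.attr_heads = nn.Linear(feat_dim * (num_parts + 1), num_attributes)

    def forward(self, part_patches):
        # part_patches: list of (B, 3, 64, 64) tensors, one per part / whole box.
        feats = [net(p) for net, p in zip(self.part_nets, part_patches)]
        return self.attr_heads(torch.cat(feats, dim=1))  # (B, num_attributes) logits


if __name__ == "__main__":
    model = PoseAlignedAttributeNet(num_parts=4, num_attributes=9)
    patches = [torch.randn(2, 3, 64, 64) for _ in range(5)]  # 4 parts + whole box
    logits = model(patches)
    print(logits.shape)  # torch.Size([2, 9])
```

Since attributes are independent binary labels (gender, hair style, and so on), the logits would typically be trained with a per-attribute binary loss such as `nn.BCEWithLogitsLoss`.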
