Clothing Co-Parsing by Joint Image Segmentation and Labeling

2019-12-16

Abstract

This paper aims at developing an integrated system for clothing co-parsing, in order to jointly parse a set of clothing images (unsegmented but annotated with tags) into semantic configurations. We propose a data-driven framework consisting of two phases of inference. The first phase, referred to as image co-segmentation, iterates to extract consistent regions on images and jointly refines the regions over all images by employing the exemplar-SVM (ESVM) technique [23]. In the second phase (i.e., region co-labeling), we construct a multi-image graphical model by taking the segmented regions as vertices, and incorporate several contexts of clothing configuration (e.g., item location and mutual interactions). The joint label assignment can be solved using the efficient Graph Cuts algorithm. In addition to evaluating our framework on the Fashionista dataset [30], we construct a dataset called CCP, consisting of 2,098 high-resolution street fashion photos, to demonstrate the performance of our system. We achieve 90.29% / 88.23% segmentation accuracy and 65.52% / 63.89% recognition rate on the Fashionista and CCP datasets, respectively, which is superior to state-of-the-art methods.
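The co-labeling phase described above assigns labels to region vertices by minimizing an energy with unary (data) and pairwise (context) terms via graph cuts. As a rough illustration only, the sketch below shows the binary building block of such an assignment (multi-label parsing typically repeats this inside alpha-expansion, which the paper's solver would handle): regions become nodes, unary costs become edges to a source/sink, and a Potts smoothness term becomes inter-region edges. The region names, costs, and the `networkx` min-cut solver are all illustrative assumptions, not the authors' implementation.

```python
import networkx as nx

# Hypothetical toy instance: three adjacent regions, two labels
# (0 = "shirt", 1 = "skirt"). unary[r] = (cost of label 0, cost of label 1).
unary = {"A": (1, 9), "B": (4, 5), "C": (9, 1)}
neighbors = [("A", "B"), ("B", "C")]  # adjacency between regions
lam = 2  # Potts smoothness weight: penalty when neighbors take different labels

G = nx.DiGraph()
for r, (d0, d1) in unary.items():
    # Edge s->r is cut iff r ends on the sink side (label 1): pay d1.
    G.add_edge("s", r, capacity=d1)
    # Edge r->t is cut iff r ends on the source side (label 0): pay d0.
    G.add_edge(r, "t", capacity=d0)
for u, v in neighbors:
    # Symmetric smoothness edges, cut when u and v get different labels.
    G.add_edge(u, v, capacity=lam)
    G.add_edge(v, u, capacity=lam)

cut_value, (s_side, _) = nx.minimum_cut(G, "s", "t")
labels = {r: (0 if r in s_side else 1) for r in unary}
print(cut_value, labels)  # minimum energy and the optimal label per region
```

Here the min-cut value equals the minimum of the energy E(L) = Σ D_r(l_r) + λ·Σ [l_u ≠ l_v], so the ambiguous region "B" is pulled toward the label of its strongly confident neighbor "A".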

