
Context Encoders: Feature Learning by Inpainting


Abstract

We present an unsupervised visual feature learning algorithm driven by context-based pixel prediction. By analogy with auto-encoders, we propose Context Encoders – a convolutional neural network trained to generate the contents of an arbitrary image region conditioned on its surroundings. In order to succeed at this task, context encoders need to both understand the content of the entire image, as well as produce a plausible hypothesis for the missing part(s). When training context encoders, we have experimented with both a standard pixel-wise reconstruction loss, as well as a reconstruction plus an adversarial loss. The latter produces much sharper results because it can better handle multiple modes in the output. We found that a context encoder learns a representation that captures not just appearance but also the semantics of visual structures. We quantitatively demonstrate the effectiveness of our learned features for CNN pre-training on classification, detection, and segmentation tasks. Furthermore, context encoders can be used for semantic inpainting tasks, either stand-alone or as initialization for non-parametric methods.
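
As a concrete illustration of the joint objective the abstract describes, here is a minimal sketch assuming PyTorch. The `generator` and `discriminator` modules, the zero-filled context input, and the loss weights `lambda_rec` / `lambda_adv` are illustrative assumptions, not the paper's exact configuration.

```python
# Minimal sketch of the context-encoder training objective: a masked
# pixel-wise L2 reconstruction loss plus a standard GAN loss.
import torch
import torch.nn as nn

def context_encoder_losses(generator, discriminator, images, mask,
                           lambda_rec=0.999, lambda_adv=0.001):
    """Compute the joint loss for one batch.

    images: (N, C, H, W) ground-truth images
    mask:   (N, 1, H, W) binary mask, 1 where pixels were dropped
    """
    # The network only sees the surroundings: masked pixels are zeroed.
    context = images * (1.0 - mask)
    prediction = generator(context)  # assumed to output a full-size image

    # Pixel-wise L2 loss, restricted to the dropped region by the mask.
    rec_loss = ((prediction - images) ** 2 * mask).mean()

    # Adversarial term: the generator tries to make the in-filled image
    # look real to the discriminator.
    filled = context + prediction * mask
    logits_fake = discriminator(filled)
    adv_loss = nn.functional.binary_cross_entropy_with_logits(
        logits_fake, torch.ones_like(logits_fake))

    total = lambda_rec * rec_loss + lambda_adv * adv_loss
    return total, rec_loss, adv_loss
```

Note the heavily skewed default weighting: the reconstruction term dominates and anchors the prediction to the ground truth, while the small adversarial term sharpens the in-filled region by penalizing blurry, averaged-out outputs.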


Popular Resources

  • The Variational S...

    Unlike traditional images which do not offer in...

  • Learning to Predi...

    Much of model-based reinforcement learning invo...

  • Stratified Strate...

    In this paper we introduce Stratified Strategy ...

  • A Mathematical Mo...

    Direct democracy, where each voter casts one vo...

  • Rating-Boosted La...

    The performance of a recommendation system reli...