A CRITICAL ANALYSIS OF SELF-SUPERVISION, OR WHAT WE CAN LEARN FROM A SINGLE IMAGE

2019-12-30

Abstract
We look critically at popular self-supervision techniques for learning deep convolutional neural networks without manual labels. We show that three different and representative methods, BiGAN, RotNet and DeepCluster, can learn the first few layers of a convolutional network from a single image as well as using millions of images and manual labels, provided that strong data augmentation is used. However, for deeper layers the gap with manual supervision cannot be closed even if millions of unlabelled images are used for training. We conclude that: (1) the weights of the early layers of deep networks contain limited information about the statistics of natural images, that (2) such low-level statistics can be learned through self-supervision just as well as through strong supervision, and that (3) the low-level statistics can be captured via synthetic transformations instead of using a large image dataset.
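To make the setup described in the abstract concrete, below is a minimal sketch of learning from a single image with strong data augmentation, using a RotNet-style rotation-prediction pretext task as the self-supervision signal. This is not the authors' released pipeline: the file name `single_image.jpg`, the small convnet, the augmentation parameters, and all hyperparameters are illustrative assumptions.

```python
# Minimal sketch (illustrative assumptions throughout): augmented crops of one
# source image are rotated by 0/90/180/270 degrees and a small convnet is
# trained to predict the rotation, a RotNet-style pretext task.
import torch
import torch.nn as nn
from PIL import Image
from torchvision import transforms

# Strong augmentation: aggressive cropping, flips, and color jitter turn a
# single image into a diverse stream of training patches.
augment = transforms.Compose([
    transforms.RandomResizedCrop(64, scale=(0.1, 1.0)),
    transforms.RandomHorizontalFlip(),
    transforms.ColorJitter(0.4, 0.4, 0.4, 0.1),
    transforms.ToTensor(),
])

source = Image.open("single_image.jpg").convert("RGB")  # the one training image (hypothetical path)

def make_batch(batch_size=32):
    """Sample augmented crops and attach random rotation labels (0, 90, 180, 270 degrees)."""
    crops = torch.stack([augment(source) for _ in range(batch_size)])
    labels = torch.randint(0, 4, (batch_size,))
    rotated = torch.stack([torch.rot90(c, k=int(k), dims=(1, 2))
                           for c, k in zip(crops, labels)])
    return rotated, labels

# Small convnet standing in for the early layers studied in the paper,
# followed by a 4-way rotation classification head.
model = nn.Sequential(
    nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
    nn.Flatten(), nn.Linear(128, 4),
)
opt = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
loss_fn = nn.CrossEntropyLoss()

for step in range(100):  # illustrative number of steps
    x, y = make_batch()
    loss = loss_fn(model(x), y)
    opt.zero_grad()
    loss.backward()
    opt.step()
```

After pretext training, the early convolutional layers would be evaluated by freezing them and training a classifier on top, which is how the paper compares single-image self-supervision against supervision with millions of labelled images.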

