A BASELINE FOR FEW-SHOT IMAGE CLASSIFICATION


2020-01-02

Abstract

Anonymous authors; paper under double-blind review.

Fine-tuning a deep network trained with the standard cross-entropy loss is a strong baseline for few-shot learning. When fine-tuned transductively, this baseline outperforms the current state-of-the-art on standard datasets such as Mini-Imagenet, Tiered-Imagenet, CIFAR-FS and FC-100 with the same hyper-parameters. The simplicity of this approach enables us to demonstrate the first few-shot learning results on the Imagenet-21k dataset. We find that using a large number of meta-training classes results in high few-shot accuracies even for a large number of few-shot classes. We do not advocate our approach as the solution for few-shot learning, but simply use the results to highlight limitations of current benchmarks and few-shot protocols. We perform extensive studies on benchmark datasets to propose a metric that quantifies the "hardness" of a few-shot episode. This metric can be used to report the performance of few-shot algorithms in a more systematic way.
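The abstract's central idea, fine-tuning a pretrained classifier transductively, can be illustrated with a small sketch. This is not the paper's implementation: it assumes frozen pretrained features, a fresh linear head, and a loss that combines cross-entropy on the labeled support set with an entropy penalty on the unlabeled query set (the transductive term). All function and parameter names here are illustrative.

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def transductive_finetune(feats_s, ys, feats_q, n_way,
                          lr=0.1, steps=100, ent_weight=1.0):
    """Fit a linear head on frozen features by gradient descent on
    CE(support) + ent_weight * H(query), then label the query set."""
    d = feats_s.shape[1]
    rng = np.random.default_rng(0)
    W = rng.normal(scale=0.01, size=(d, n_way))
    b = np.zeros(n_way)
    Y = np.eye(n_way)[ys]                      # one-hot support labels
    for _ in range(steps):
        ps = softmax(feats_s @ W + b)          # support predictions
        pq = softmax(feats_q @ W + b)          # query predictions
        # gradient of support cross-entropy w.r.t. logits
        gs = feats_s.T @ (ps - Y) / len(ys)
        gbs = (ps - Y).mean(axis=0)
        # gradient of query entropy H = -sum_k p_k log p_k w.r.t. logits:
        # dH/dz_j = -p_j * ((log p_j + 1) - E_p[log p + 1])
        logp = np.log(pq + 1e-12)
        gz = -pq * (logp + 1 - (pq * (logp + 1)).sum(axis=1, keepdims=True))
        gq = feats_q.T @ gz / len(feats_q)
        gbq = gz.mean(axis=0)
        W -= lr * (gs + ent_weight * gq)
        b -= lr * (gbs + ent_weight * gbq)
    return softmax(feats_q @ W + b).argmax(axis=1)
```

On a toy 2-way, 1-shot episode with well-separated feature clusters, the support cross-entropy orients the decision boundary and the entropy term sharpens the query predictions around it.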


