pytorch-retraining
Transfer Learning shootout for PyTorch's model zoo (torchvision).

  • Load any pretrained model from PyTorch's model zoo with a custom final layer (num_classes) in one line

model_pretrained, diff = load_model_merged('inception_v3', num_classes)
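For comparison, here is a minimal sketch of roughly what that one-liner corresponds to in plain torchvision. The internals of `load_model_merged` (including the returned `diff`, the names of parameters that differ from the pretrained checkpoint) are assumptions, not the repo's actual implementation:

```python
import torch.nn as nn
from torchvision import models

# Sketch (assumed equivalent): load a pretrained Inception v3 and replace the
# final classifier(s) with fresh layers sized for num_classes.
num_classes = 23
model = models.inception_v3(pretrained=True)
model.fc = nn.Linear(model.fc.in_features, num_classes)
model.AuxLogits.fc = nn.Linear(model.AuxLogits.fc.in_features, num_classes)
```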
  • Retrain a minimal set of layers (as inferred on load) or a custom number of layers on multiple GPUs, optionally with a _Cyclical Learning Rate_ (Smith 2017)

final_params_names = [d[0] for d in diff]
stats = train_eval(model_pretrained, trainloader, testloader, final_params_names)
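A hedged sketch of how such a minimal retraining setup could be wired with standard PyTorch pieces: freeze everything except the freshly initialised final parameters, wrap the model for multiple GPUs, and optionally drive the optimiser with a cyclical learning rate via `torch.optim.lr_scheduler.CyclicLR`. The names `model_pretrained` and `diff` follow the snippets above; everything else is an assumption about what `train_eval` does internally:

```python
from torch import nn, optim

# Assumed setup: only the parameters listed in `diff` (the new final layers)
# are retrained; all pretrained weights stay frozen.
final_params_names = [d[0] for d in diff]
for name, param in model_pretrained.named_parameters():
    param.requires_grad = name in final_params_names

model_pretrained = nn.DataParallel(model_pretrained)  # multi-GPU data parallelism

trainable = [p for p in model_pretrained.parameters() if p.requires_grad]
optimizer = optim.SGD(trainable, lr=1e-3, momentum=0.9)

# Cyclical Learning Rate (Smith 2017); CyclicLR is used here as a stand-in for
# whatever CLR schedule the repo implements. Call scheduler.step() per batch.
scheduler = optim.lr_scheduler.CyclicLR(optimizer, base_lr=1e-4, max_lr=1e-2)
```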
  • Chart training_time, evaluation_time (fps) and top-1 accuracy for varying levels of retraining depth (shallow, deep and from scratch)
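Outside the repo's own plotting code, a small matplotlib sketch for that kind of chart could look like the following; the structure of `results` and its keys are purely hypothetical placeholders for the collected stats:

```python
import matplotlib.pyplot as plt

def plot_top1(results):
    """Plot top-1 accuracy per retraining depth.

    `results` is assumed to map depth labels ("shallow", "deep",
    "from scratch") to dicts with "train_time", "eval_fps" and "top1"
    entries, e.g. aggregated from repeated train_eval() runs.
    """
    depths = list(results)
    plt.bar(depths, [results[d]["top1"] for d in depths])
    plt.ylabel("top-1 accuracy")
    plt.title("Retraining depth vs. top-1 accuracy")
    plt.show()
```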

[chart] Transfer learning on example dataset Bee vs Ants with 2x K80 GPUs

Results on a more elaborate dataset

num_classes = 23, slightly unbalanced classes, high variance in rotation and motion-blur artifacts; trained on 1x GTX 1080 Ti

[chart_17] Constant LR with momentum

[chart_17_clr] Cyclical Learning Rate

