
pytorch-cnn-finetune


Fine-tune pretrained Convolutional Neural Networks with PyTorch.


Features

  • Gives access to the most popular CNN architectures pretrained on ImageNet.

  • Automatically replaces the classifier on top of the network, which allows you to train a network on a dataset that has a different number of classes.

  • Allows you to use images with any resolution (and not only the resolution that was used for training the original model on ImageNet).

  • Allows adding a Dropout layer or a custom pooling layer.

Supported architectures and models

From torchvision package:

  • ResNet (resnet18, resnet34, resnet50, resnet101, resnet152)

  • DenseNet (densenet121, densenet169, densenet201, densenet161)

  • Inception v3 (inception_v3)

  • VGG (vgg11, vgg11_bn, vgg13, vgg13_bn, vgg16, vgg16_bn, vgg19, vgg19_bn)

  • SqueezeNet (squeezenet1_0, squeezenet1_1)

  • AlexNet (alexnet)

From Pretrained models for PyTorch package:

  • ResNeXt (resnext101_32x4d, resnext101_64x4d)

  • NASNet-A Large (nasnetalarge)

  • NASNet-A Mobile (nasnetamobile)

  • Inception-ResNet v2 (inceptionresnetv2)

  • Dual Path Networks (dpn68, dpn68b, dpn92, dpn98, dpn131, dpn107)

  • Inception v4 (inception_v4)

  • Xception (xception)

  • Squeeze-and-Excitation Networks (senet154, se_resnet50, se_resnet101, se_resnet152, se_resnext50_32x4d, se_resnext101_32x4d)

Requirements

  • Python 3.5+

  • PyTorch 0.3+

Installation

pip install cnn_finetune
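
To verify the installation, a quick sanity check (this snippet is an assumption, not part of the original README) is to import the package entry point used throughout the examples below:

from cnn_finetune import make_model  # should import without errors after installation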

Major changes:

Version 0.4

  • The default value of the pretrained argument in make_model changed from False to True. Now calling make_model('resnet18', num_classes=10) is equivalent to make_model('resnet18', num_classes=10, pretrained=True).

Example usage:

Make a model with ImageNet weights for 10 classes

from cnn_finetune import make_model

model = make_model('resnet18', num_classes=10, pretrained=True)
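
The returned model is a regular torch.nn.Module, so a dummy forward pass (this snippet is an illustration, not from the original README) confirms that the replaced classifier now outputs 10 logits:

import torch

model.eval()
with torch.no_grad():
    dummy = torch.randn(2, 3, 224, 224)  # batch of 2 RGB images at the 224x224 ImageNet resolution
    logits = model(dummy)
print(logits.shape)  # torch.Size([2, 10])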

Make a model with Dropout

model = make_model('nasnetalarge', num_classes=10, pretrained=True, dropout_p=0.5)

Make a model with Global Max Pooling instead of Global Average Pooling

import torch.nn as nn

model = make_model('inceptionresnetv2', num_classes=10, pretrained=True, pool=nn.AdaptiveMaxPool2d(1))

Make a VGG16 model that takes images of size 256x256 pixels

VGG and AlexNet models use fully-connected layers, so you have to additionally pass the input image size when constructing a new model. This information is needed to determine the input size of the fully-connected layers.

model = make_model('vgg16', num_classes=10, pretrained=True, input_size=(256, 256))

Make a VGG16 model that takes images of size 256x256 pixels and uses a custom classifier

import torch.nn as nn

def make_classifier(in_features, num_classes):
    return nn.Sequential(
        nn.Linear(in_features, 4096),
        nn.ReLU(inplace=True),
        nn.Linear(4096, num_classes),
    )

model = make_model('vgg16', num_classes=10, pretrained=True, input_size=(256, 256), classifier_factory=make_classifier)

Show preprocessing that was used to train the original model on ImageNet

>>> model = make_model('resnext101_64x4d', num_classes=10, pretrained=True)
>>> print(model.original_model_info)
ModelInfo(input_space='RGB', input_size=[3, 224, 224], input_range=[0, 1], mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])
>>> print(model.original_model_info.mean)
[0.485, 0.456, 0.406]
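
The README only prints this metadata; the sketch below is an assumption on my part, showing one way to turn original_model_info into a torchvision preprocessing pipeline for your own images:

from torchvision import transforms

info = model.original_model_info
preprocess = transforms.Compose([
    transforms.Resize(info.input_size[1:]),               # input_size is [channels, height, width]
    transforms.ToTensor(),                                # scales pixels to [0, 1], matching input_range
    transforms.Normalize(mean=info.mean, std=info.std),
])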

CIFAR10 Example

See examples/cifar10.py file (requires PyTorch 0.4).
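
That script is not reproduced here; the following is only a minimal sketch of the kind of fine-tuning loop it implements, assuming standard torch/torchvision APIs, hypothetical hyperparameters, and the ImageNet statistics printed above:

import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms
from cnn_finetune import make_model

model = make_model('resnet18', num_classes=10, pretrained=True)
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
model = model.to(device)

# Resize CIFAR10 images and normalize with the ImageNet statistics shown above
transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])
train_set = datasets.CIFAR10('./data', train=True, download=True, transform=transform)
train_loader = DataLoader(train_set, batch_size=32, shuffle=True)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)  # hypothetical hyperparameters

model.train()
for epoch in range(2):  # a couple of epochs, just to illustrate the loop
    for images, labels in train_loader:
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()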
