pytorch-cnn-finetune
Features:

- Gives access to the most popular CNN architectures pretrained on ImageNet.
- Automatically replaces the classifier on top of the network, which allows you to train the network on a dataset with a different number of classes.
- Allows you to use images of any resolution (not only the resolution that was used to train the original model on ImageNet).
- Allows adding a Dropout layer or a custom pooling layer.
Supported architectures and models:

- ResNet (resnet18, resnet34, resnet50, resnet101, resnet152)
- DenseNet (densenet121, densenet169, densenet201, densenet161)
- Inception v3 (inception_v3)
- VGG (vgg11, vgg11_bn, vgg13, vgg13_bn, vgg16, vgg16_bn, vgg19, vgg19_bn)
- SqueezeNet (squeezenet1_0, squeezenet1_1)
- AlexNet (alexnet)
- ResNeXt (resnext101_32x4d, resnext101_64x4d)
- NASNet-A Large (nasnetalarge)
- NASNet-A Mobile (nasnetamobile)
- Inception-ResNet v2 (inceptionresnetv2)
- Dual Path Networks (dpn68, dpn68b, dpn92, dpn98, dpn131, dpn107)
- Inception v4 (inception_v4)
- Xception (xception)
- Squeeze-and-Excitation Networks (senet154, se_resnet50, se_resnet101, se_resnet152, se_resnext50_32x4d, se_resnext101_32x4d)
Requirements:

- Python 3.5+
- PyTorch 0.3+

Installation:

```
pip install cnn_finetune
```
Major changes: the default value of the `pretrained` argument in `make_model` has changed from `False` to `True`. Calling `make_model('resnet18', num_classes=10)` is now equivalent to `make_model('resnet18', num_classes=10, pretrained=True)`.
```python
from cnn_finetune import make_model

model = make_model('resnet18', num_classes=10, pretrained=True)
```
```python
model = make_model('nasnetalarge', num_classes=10, pretrained=True, dropout_p=0.5)
```
```python
import torch.nn as nn

model = make_model('inceptionresnetv2', num_classes=10, pretrained=True, pool=nn.AdaptiveMaxPool2d(1))
```
VGG and AlexNet models use fully-connected layers, so you have to additionally pass the input size of images when constructing a new model. This information is needed to determine the input size of fully-connected layers.
```python
model = make_model('vgg16', num_classes=10, pretrained=True, input_size=(256, 256))
```
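To see why the input size matters, here is a back-of-the-envelope sketch (an illustration only, not part of the library's API) of how the flattened feature size feeding VGG16's first fully-connected layer depends on image resolution, assuming VGG16's standard five 2×2 max-pool layers (÷32 overall downsampling) and 512 output channels in the last conv block:

```python
# VGG16's five max-pool layers each halve spatial resolution (÷32 total),
# and its final conv block outputs 512 channels. The first fully-connected
# layer therefore needs a different in_features for each input size.
def vgg_fc_in_features(height, width, channels=512, downsample=32):
    return channels * (height // downsample) * (width // downsample)

print(vgg_fc_in_features(224, 224))  # 25088 — the stock ImageNet VGG16 value
print(vgg_fc_in_features(256, 256))  # 32768
```

This is the computation the library performs for you when you pass `input_size`.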
```python
import torch.nn as nn

def make_classifier(in_features, num_classes):
    return nn.Sequential(
        nn.Linear(in_features, 4096),
        nn.ReLU(inplace=True),
        nn.Linear(4096, num_classes),
    )

model = make_model('vgg16', num_classes=10, pretrained=True, input_size=(256, 256), classifier_factory=make_classifier)
```
```python
>>> model = make_model('resnext101_64x4d', num_classes=10, pretrained=True)
>>> print(model.original_model_info)
ModelInfo(input_space='RGB', input_size=[3, 224, 224], input_range=[0, 1], mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])
>>> print(model.original_model_info.mean)
[0.485, 0.456, 0.406]
```
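These statistics are typically used to normalize input images before they are fed to the network. A minimal plain-Python sketch of the channel-wise normalization (the helper name is hypothetical; in practice you would use `torchvision.transforms.Normalize` with these values):

```python
# ImageNet statistics as reported by model.original_model_info above.
MEAN = [0.485, 0.456, 0.406]
STD = [0.229, 0.224, 0.225]

def normalize_pixel(rgb):
    """Apply channel-wise (x - mean) / std to an RGB triple scaled to [0, 1]."""
    return [(x - m) / s for x, m, s in zip(rgb, MEAN, STD)]

# A pixel equal to the channel means maps to zero in every channel.
print(normalize_pixel([0.485, 0.456, 0.406]))  # [0.0, 0.0, 0.0]
```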
For a complete training example, see the examples/cifar10.py file (requires PyTorch 0.4).