The original program shown in the above link assumes a batch size of
one, so it is not true batch training. The files
seq2seq_translation_batch_training.py and
seq2seq_translation_batch_training.ipynb show how to train the model in batches.
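Batching variable-length sentences requires padding them to a common length and then packing them so the RNN ignores the padding. A minimal sketch of that idea (the token values below are made up for illustration; this is not the repository's code):

```python
import torch
from torch.nn.utils.rnn import pad_sequence, pack_padded_sequence

# Hypothetical batch of tokenized sentences with different lengths.
seqs = [torch.tensor([1, 2, 3, 4]), torch.tensor([5, 6]), torch.tensor([7, 8, 9])]
lengths = torch.tensor([len(s) for s in seqs])

# Pad to a rectangular batch; shape becomes [batch, max_len] = [3, 4].
padded = pad_sequence(seqs, batch_first=True, padding_value=0)

# Pack so an RNN (e.g. nn.GRU) skips the padded positions.
packed = pack_padded_sequence(padded, lengths, batch_first=True,
                              enforce_sorted=False)
print(padded.shape)  # torch.Size([3, 4])
```

The padded batch can be fed to an embedding layer before packing; the packed result goes straight into an `nn.GRU` or `nn.LSTM`.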
At the end of this tutorial, the author asks readers to run the code on a harder dataset. I ran a ResNet on the plant seedlings classification dataset.
My code shows how to add multiple layers on top of a deep neural
network model and how to use pretrained models in a Kaggle kernel.
ImageFolderSplitter.py
This file provides two classes, ImageFolderSplitter and
DatasetFromFilename. They work like torchvision.datasets.ImageFolder,
but can also split a whole dataset into a training set and a validation
set.
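The core idea behind such a splitter can be sketched as follows. This is a simplified illustration, not the repository's actual implementation: it collects (path, label) pairs from an ImageFolder-style directory tree and splits them, where the repo's classes additionally wrap the resulting lists as PyTorch Dataset objects.

```python
import random
from pathlib import Path

def split_image_folder(root, train_ratio=0.8, seed=0):
    """Collect (path, label) pairs from root/class_name/image files
    and split them into training and validation lists."""
    classes = sorted(d.name for d in Path(root).iterdir() if d.is_dir())
    class_to_idx = {c: i for i, c in enumerate(classes)}
    samples = [(str(p), class_to_idx[c])
               for c in classes
               for p in (Path(root) / c).glob("*") if p.is_file()]
    # Shuffle with a fixed seed so the split is reproducible.
    rng = random.Random(seed)
    rng.shuffle(samples)
    cut = int(len(samples) * train_ratio)
    return samples[:cut], samples[cut:]
```

A Dataset subclass (like DatasetFromFilename) would then open each path with PIL in `__getitem__` and apply the transforms.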
image_transforms.py
ShiftTransform
A class that simulates the height_shift_range and width_shift_range of
ImageDataGenerator in Keras. It is initialized with two fractions, x
and y, representing the fraction of the width and the fraction of the
height by which to translate a PIL Image object. It is meant to be
composed with other transforms in torchvision.
PyTorch's RandomAffine class can do something similar. However, after
RandomAffine translates an image, it leaves flat-colored areas (the
color is specified by the fillcolor parameter) on the image. Unlike
RandomAffine, ShiftTransform fills the points outside the boundaries
by replicating the points on the boundaries.
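The boundary-replication behavior described above can be sketched with NumPy's edge padding. This is an illustration of the idea, not the repository's ShiftTransform code; the function name and signature are made up:

```python
import numpy as np
from PIL import Image

def shift_with_edge_fill(img, x_frac, y_frac):
    """Translate a PIL image by a fraction of its width/height,
    filling the uncovered region by replicating the boundary pixels
    (np.pad with mode='edge') instead of a constant fill color."""
    arr = np.asarray(img)
    dx = int(round(x_frac * img.width))   # positive dx shifts right
    dy = int(round(y_frac * img.height))  # positive dy shifts down
    # Pad on the side the content moves away from, then crop back
    # to the original size.
    pad_x = (max(dx, 0), max(-dx, 0))
    pad_y = (max(dy, 0), max(-dy, 0))
    pads = (pad_y, pad_x) + ((0, 0),) * (arr.ndim - 2)
    padded = np.pad(arr, pads, mode="edge")
    h, w = arr.shape[:2]
    y0, x0 = max(-dy, 0), max(-dx, 0)
    return Image.fromarray(padded[y0:y0 + h, x0:x0 + w])
```

Wrapped in a class with a `__call__(self, img)` method, the same logic plugs directly into `torchvision.transforms.Compose`.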