I use a reduced-size ShuffleNet: the network in the original paper has more layers, but it is easy to change a couple of parameters in shufflenet/CONSTANTS.py to match the original architecture.
For the input pipeline I use tf.data.TFRecordDataset.
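A minimal sketch of such a TFRecord input pipeline is shown below; the feature keys ("image", "label") and batching parameters are assumptions for illustration, not necessarily those produced by image_dataset_to_tfrecords.py.

```python
import tensorflow as tf

def parse_example(serialized):
    # Feature names here are assumptions; the repo's tfrecords may use different keys.
    features = tf.io.parse_single_example(
        serialized,
        {
            "image": tf.io.FixedLenFeature([], tf.string),
            "label": tf.io.FixedLenFeature([], tf.int64),
        },
    )
    image = tf.io.decode_jpeg(features["image"], channels=3)
    return image, features["label"]

def make_dataset(tfrecord_files, batch_size=32):
    # Read serialized examples, decode them in parallel, then shuffle and batch.
    dataset = tf.data.TFRecordDataset(tfrecord_files)
    dataset = dataset.map(parse_example, num_parallel_calls=tf.data.AUTOTUNE)
    dataset = dataset.shuffle(10000).batch(batch_size).prefetch(tf.data.AUTOTUNE)
    return dataset
```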
For data augmentation I use 56x56 sized random crops and random color manipulations.
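The augmentation described above can be sketched roughly as follows; the exact color-jitter ranges are assumptions, not the values used in this repo.

```python
import tensorflow as tf

def augment(image):
    # Take a random 56x56 crop of the (larger) input image.
    image = tf.image.random_crop(image, size=[56, 56, 3])
    # Random color manipulations; the ranges below are illustrative assumptions.
    image = tf.image.random_brightness(image, max_delta=0.1)
    image = tf.image.random_contrast(image, lower=0.8, upper=1.2)
    image = tf.image.random_saturation(image, lower=0.8, upper=1.2)
    # Keep pixel values in the valid [0, 1] range after jittering.
    return tf.clip_by_value(image, 0.0, 1.0)
```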
I use a reduce-on-plateau learning rate scheduler.
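The idea of a reduce-on-plateau scheduler can be sketched in a few lines of framework-agnostic Python; the factor, patience, and minimum learning rate below are assumed defaults, not the values in shufflenet/CONSTANTS.py.

```python
class ReduceOnPlateau:
    """Multiply the learning rate by `factor` when the validation loss
    has not improved for `patience` consecutive epochs (illustrative sketch)."""

    def __init__(self, lr=0.1, factor=0.1, patience=3, min_lr=1e-6):
        self.lr = lr
        self.factor = factor
        self.patience = patience
        self.min_lr = min_lr
        self.best = float("inf")
        self.bad_epochs = 0

    def step(self, val_loss):
        # Call once per epoch with the current validation loss.
        if val_loss < self.best:
            self.best = val_loss
            self.bad_epochs = 0
        else:
            self.bad_epochs += 1
            if self.bad_epochs >= self.patience:
                self.lr = max(self.lr * self.factor, self.min_lr)
                self.bad_epochs = 0
        return self.lr
```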
How to use it
Assuming the Tiny ImageNet data is in /home/ubuntu/data/tiny-imagenet-200/, the steps are:
1. Run cd ShuffleNet-tensorflow.
2. Run python tiny_imagenet/move_data.py to slightly change the folder structure of the data.
3. Run python image_dataset_to_tfrecords.py to convert the dataset to the tfrecords format.
4. (optional) If you want to change the network's depth, edit the number of ShuffleNet units in shufflenet/CONSTANTS.py.
5. Run python train.py to begin training. Evaluation runs after each epoch.

Logs and the saved model will be in logs/run0 and saved/run0.
To train on your own dataset, you need to change a few values in the shufflenet/CONSTANTS.py file.
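For orientation, such a constants file typically collects a handful of dataset- and architecture-level values. The names and numbers below are hypothetical examples, not the actual contents of shufflenet/CONSTANTS.py; consult that file for the real names.

```python
# Hypothetical sketch of a constants module; names and values are illustrative only.
NUM_CLASSES = 200        # Tiny ImageNet has 200 classes; set to your dataset's count
IMAGE_SIZE = 56          # side length of the random training crops
BATCH_SIZE = 32          # training batch size
NUM_SHUFFLENET_UNITS = [3, 7, 3]  # units per stage; increase to match the paper
```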