This is a reproduced version of MAML, implemented with PyTorch 1.2.0, with multi-GPU support in both the meta-training and meta-testing phases.
All hyper-parameters and tricks, e.g. gradient clipping, are kept strictly consistent with the original paper, Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks (HERE), and its TensorFlow implementation (HERE).
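As an aside on the gradient-clipping trick mentioned above: in the PyTorch code this would typically be done with torch.nn.utils.clip_grad_norm_, but the math it performs is simple. The following pure-Python sketch (illustrative only, not taken from this repository) shows clipping by global L2 norm:

```python
import math

def clip_by_global_norm(grads, max_norm):
    """Scale a list of gradient values so their global L2 norm is at most max_norm.

    Illustrative stand-in for torch.nn.utils.clip_grad_norm_; operates on
    plain floats instead of parameter tensors.
    """
    total = math.sqrt(sum(g * g for g in grads))
    if total <= max_norm:
        return grads  # already within the bound, leave gradients untouched
    scale = max_norm / total
    return [g * scale for g in grads]
```

For example, a gradient of [3.0, 4.0] has global norm 5.0; clipping to max_norm=1.0 rescales it to [0.6, 0.8] while preserving its direction.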
Platform
Python: 3.7
PyTorch: 1.2.0
Howto
1. Download the MiniImagenet dataset.
2. Change Param.root in train_mini_adam.py to the root directory of your MiniImagenet dataset.
3. Run: python train_mini_adam.py
Comparison with the original MAML implementation on miniImageNet