For the Omniglot experiments, the 28x28 resized Omniglot images are included directly in this git repository; they were created based on omniglot and maml.
For the mini-Imagenet experiments, please download mini-Imagenet, put it in ./datas/mini-Imagenet, and run proc_image.py to preprocess it and generate the train/val/test datasets. (This preprocessing method is based on maml.)
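The core of that preprocessing step is partitioning the mini-Imagenet classes into disjoint train/val/test sets. A minimal sketch of that split, assuming the standard 64/16/20 class protocol (the function and directory names here are illustrative, not the repo's actual code):

```python
# Hedged sketch: partition class names into disjoint train/val/test splits.
# The 64/16/20 split sizes follow the common mini-Imagenet protocol; the
# class-name format below is a placeholder.
import random

def split_classes(class_names, n_train=64, n_val=16, n_test=20, seed=0):
    """Shuffle class names and partition them into three disjoint splits."""
    assert len(class_names) == n_train + n_val + n_test
    rng = random.Random(seed)
    shuffled = list(class_names)
    rng.shuffle(shuffled)
    return (shuffled[:n_train],
            shuffled[n_train:n_train + n_val],
            shuffled[n_train + n_val:])

train, val, test = split_classes([f"n{i:08d}" for i in range(100)])
```

Because the splits are over classes (not images), every test class is unseen during training, which is what makes the evaluation few-shot.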
You can change the -b parameter based on your GPU memory. Currently the scripts load my trained models; if you want to train from scratch, delete the saved models first.
Test
Omniglot 5-way 1-shot:
python omniglot_test_one_shot.py -w 5 -s 1
The other experiments are tested in the same way, using the corresponding script and -w/-s settings.
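For reference, the -w (way) and -s (shot) flags control how each evaluation episode is sampled. A minimal sketch of N-way K-shot episode construction, assuming the standard protocol (the function names and dummy data here are illustrative, not the repo's actual code):

```python
# Hedged sketch: form one N-way K-shot episode from per-class data.
# This mirrors the generic few-shot evaluation protocol; names are placeholders.
import random

def sample_episode(data_by_class, n_way=5, k_shot=1, n_query=1, seed=None):
    """Pick n_way classes; from each, draw k_shot support and n_query query items."""
    rng = random.Random(seed)
    classes = rng.sample(sorted(data_by_class), n_way)
    support, query = [], []
    for label, cls in enumerate(classes):
        items = rng.sample(data_by_class[cls], k_shot + n_query)
        support += [(x, label) for x in items[:k_shot]]
        query += [(x, label) for x in items[k_shot:]]
    return support, query

# Dummy dataset: 10 classes with 20 items each.
data = {c: list(range(c * 20, c * 20 + 20)) for c in range(10)}
sup, qry = sample_episode(data, n_way=5, k_shot=1, n_query=1, seed=0)
```

With -w 5 -s 1, each episode gives the model 5 labeled support images (one per class) and asks it to classify the query images among those 5 classes.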
Citing
If you use this code in your research, please cite it with the following BibTeX entry.
@inproceedings{sung2018learning,
  title={Learning to Compare: Relation Network for Few-Shot Learning},
  author={Sung, Flood and Yang, Yongxin and Zhang, Li and Xiang, Tao and Torr, Philip HS and Hospedales, Timothy M},
  booktitle={Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition},
  year={2018}
}