Just run `python miniimagenet_train.py` to start training on MiniImagenet. [Screenshot of a running training session]

If your reproduced performance is not as good as reported, try increasing the number of training epochs to train longer. MAML is also notorious for being hard to train, so this implementation only provides a basic starting point for your research. The numbers reported below were actually achieved on my machine.
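To make it clearer what "training epochs" and the meta-batch of tasks refer to, here is a minimal, self-contained sketch of the MAML inner/outer loop on a toy sine-regression problem. All names and hyperparameter values here (`forward`, `maml_task_loss`, the loop counts) are illustrative assumptions and are not the classes or arguments used by `miniimagenet_train.py`.

```python
import torch
import torch.nn.functional as F

# Tiny one-hidden-layer regressor written functionally, so adapted ("fast")
# weights can be substituted during the inner loop.
def forward(params, x):
    w1, b1, w2, b2 = params
    h = F.relu(F.linear(x, w1, b1))
    return F.linear(h, w2, b2)

def init_params():
    params = [torch.randn(40, 1) * 0.1, torch.zeros(40),
              torch.randn(1, 40) * 0.1, torch.zeros(1)]
    for p in params:
        p.requires_grad_()
    return params

def maml_task_loss(params, xs, ys, xq, yq, inner_lr=0.01, inner_steps=5):
    # Inner loop: adapt a copy of the parameters on the support set (xs, ys),
    # keeping the graph so second-order gradients reach the meta update.
    fast = list(params)
    for _ in range(inner_steps):
        loss = F.mse_loss(forward(fast, xs), ys)
        grads = torch.autograd.grad(loss, fast, create_graph=True)
        fast = [w - inner_lr * g for w, g in zip(fast, grads)]
    # Outer objective: loss of the adapted parameters on the query set.
    return F.mse_loss(forward(fast, xq), yq)

params = init_params()
meta_opt = torch.optim.Adam(params, lr=1e-3)

for it in range(10000):                      # "train longer" means raising this
    meta_opt.zero_grad()
    meta_loss = 0.0
    for _ in range(4):                       # tasks per meta-batch
        # Toy sine-wave task: amplitude and phase sampled per task.
        amp, phase = torch.rand(1) * 4 + 0.1, torch.rand(1) * 3.14
        xs, xq = torch.rand(10, 1) * 10 - 5, torch.rand(10, 1) * 10 - 5
        meta_loss = meta_loss + maml_task_loss(params, xs, amp * torch.sin(xs + phase),
                                               xq, amp * torch.sin(xq + phase))
    meta_loss.backward()                     # gradients flow through the inner-loop updates
    meta_opt.step()
```

Raising the number of outer iterations corresponds to the longer training suggested above; the second-order gradients flowing through the inner loop are part of what makes MAML training fragile.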
Benchmark
| Model         | Fine Tune | 5-way Acc. (1-shot) | 5-way Acc. (5-shot) | 20-way Acc. (1-shot) | 20-way Acc. (5-shot) |
|---------------|-----------|---------------------|---------------------|----------------------|----------------------|
| Matching Nets | N         | 43.56%              | 55.31%              | 17.31%               | 22.69%               |
| Meta-LSTM     |           | 43.44%              | 60.60%              | 16.70%               | 26.06%               |
| MAML          | Y         | 48.7%               | 63.11%              | 16.49%               | 19.29%               |
| Ours          | Y         | 46.2%               | 60.3%               | -                    | -                    |
Omniglot
Howto
Run `python omniglot_train.py`; the program will download the Omniglot dataset automatically.
Decrease the value of `args.task_num` to fit your GPU memory capacity.
For the 5-way 1-shot experiment, it allocates nearly 3 GB of GPU memory.
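For orientation, here is a hypothetical argparse sketch of the kind of hyperparameters these training scripts expose; the argument names and defaults below (`epoch`, `n_way`, `k_spt`, `k_qry`, `task_num`) are assumptions, so check `omniglot_train.py` for the actual ones.

```python
# Hypothetical sketch of the script arguments; names and defaults are
# assumptions, not copied from omniglot_train.py.
import argparse

parser = argparse.ArgumentParser()
parser.add_argument('--epoch', type=int, default=40000,
                    help='outer-loop iterations; raise for longer training')
parser.add_argument('--n_way', type=int, default=5,
                    help='classes per task')
parser.add_argument('--k_spt', type=int, default=1,
                    help='support (shot) examples per class')
parser.add_argument('--k_qry', type=int, default=15,
                    help='query examples per class')
parser.add_argument('--task_num', type=int, default=32,
                    help='tasks per meta-batch; lower this to reduce GPU memory use')
args = parser.parse_args()
```

Lowering `task_num` shrinks the meta-batch held in memory at once, so peak GPU usage drops roughly proportionally, at the cost of a noisier meta-gradient.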