DeepMind's scores from the FullyConv policy in the release paper are shown for comparison.
The model here wasn't able to learn CollectMineralsAndGas or BuildMarines.
In DefeatRoaches and DefeatZerglingsAndBanelings the results are not stable.
It took something like 5 runs to get the DefeatRoaches score reported here,
and the scores for those two maps are still considerably worse than DeepMind's.
It may be that the hyperparameters used here are off (and possibly other things as well).
Other environments seem more stable.
The training was done using one core of a Tesla K80 GPU per environment.
With PPO the scores were slightly better than with A2C for the tested
environments. However, training took much longer with PPO than with
A2C; different PPO hyperparameters might give a faster training time.
On the other hand, training with PPO seems more stable: the typical
sigmoid shape of A2C learning curves doesn't appear.
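To make the difference concrete, here is a minimal sketch of the two policy losses (illustrative only, not the implementation in this repo; the function names and the clip value of 0.2 are my assumptions):

```python
import numpy as np

def a2c_policy_loss(logp, adv):
    # A2C: plain policy-gradient loss, -log pi(a|s) * advantage.
    return -(logp * adv).mean()

def ppo_policy_loss(logp, logp_old, adv, clip_eps=0.2):
    # PPO: clipped surrogate objective. Clipping the probability
    # ratio keeps a single update from moving the policy too far,
    # which is one reason PPO training tends to look more stable.
    ratio = np.exp(logp - logp_old)
    unclipped = ratio * adv
    clipped = np.clip(ratio, 1.0 - clip_eps, 1.0 + clip_eps) * adv
    return -np.minimum(unclipped, clipped).mean()
```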
Note:
The training is not deterministic and the training time might vary even if nothing is changed.
For example, I tried to train MoveToBeacon 5 times with default parameters and 64 environments.
Here are the episode numbers at which the agent first achieved a score of 27:
The code is tested on OS X and Linux; I don't know about Windows.
Let me know if there are issues.
References
I have borrowed some ideas from https://github.com/xhujoy/pysc2-agents (the FullyConv network etc.)
and OpenAI's baselines (A2C and PPO), but the implementation here is different from those.
For parallel environments, the code from baselines was adapted for SC2.
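For illustration, here is a minimal sketch of that baselines-style subprocess pattern for stepping several environments in parallel (names like `SubprocEnvs` and `env_fns` are my own; this is not the exact code in this repo):

```python
import multiprocessing as mp

def worker(remote, env_fn):
    # Each subprocess owns one environment and serves commands over a pipe.
    env = env_fn()
    while True:
        cmd, data = remote.recv()
        if cmd == "step":
            remote.send(env.step(data))
        elif cmd == "reset":
            remote.send(env.reset())
        elif cmd == "close":
            env.close()
            remote.send(None)
            break

class SubprocEnvs:
    """Runs several environments in parallel subprocesses."""
    def __init__(self, env_fns):
        pipes = [mp.Pipe() for _ in env_fns]
        self.remotes = [p[0] for p in pipes]
        self.procs = [mp.Process(target=worker, args=(p[1], fn), daemon=True)
                      for p, fn in zip(pipes, env_fns)]
        for proc in self.procs:
            proc.start()

    def reset(self):
        for remote in self.remotes:
            remote.send(("reset", None))
        return [remote.recv() for remote in self.remotes]

    def step(self, actions):
        # Send all actions first, then gather results,
        # so the environments step concurrently.
        for remote, action in zip(self.remotes, actions):
            remote.send(("step", action))
        return [remote.recv() for remote in self.remotes]

    def close(self):
        for remote in self.remotes:
            remote.send(("close", None))
            remote.recv()
        for proc in self.procs:
            proc.join()
```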