To train your own model on 'Atari 2600 SpaceInvaders', simply run:
python run_dqn.py
To specify another environment, use the --env flag, e.g.:
python run_dqn.py --env Pong-v0
The full list of available environments can be found here. Note that the current implementation only supports environments with raw pixel observations. Tested OpenAI Gym environments:
SpaceInvaders-v0
Pong-v0
To change the number of spawned threads, use the --threads flag (default: 8).
To use the GPU instead of the CPU, pass the --gpu flag.
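The flags above can be combined in a single invocation. A sketch (the environment name and thread count are illustrative values, not defaults):

```shell
# Train on Pong with 16 worker threads on the GPU,
# using the --env, --threads, and --gpu flags documented above
python run_dqn.py --env Pong-v0 --threads 16 --gpu
```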
All available flags can be listed with: python run_dqn.py --help
To read TensorBoard logs, use: tensorboard --logdir=path/to/logdir
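For example, if training writes its logs to a directory such as ./logs (the directory name here is illustrative; use whatever path your run actually logs to):

```shell
# Launch TensorBoard pointed at the training log directory
tensorboard --logdir=./logs
```

TensorBoard then serves its dashboard on a local port (http://localhost:6006 by default).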
Trained models
To use a pretrained agent, or to change the log folder, use the --logdir flag: