Asynchronous methods for Deep Reinforcement Learning
Implementation of the algorithms described in the paper Asynchronous methods for deep reinforcement learning (Mnih et al., 2016) with Keras and TensorFlow.
The implementation uses both the threading and multiprocessing packages, so that their respective performances can be compared.
Finally, a single-threaded version that exploits the GPU is also provided. It does not harness the efficiency of the asynchronous framework, but it remains useful as a benchmark.
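Below is a minimal sketch (not the repository's actual code) of how parallel actor-learners can be launched with either package; `train_worker`, `NUM_WORKERS` and `ENV_NAME` are hypothetical names used only for illustration.

```python
import threading
import multiprocessing

NUM_WORKERS = 4
ENV_NAME = "Breakout-v0"  # any Gym environment id

def train_worker(worker_id, env_name):
    # In a real implementation each worker would create its own Gym
    # environment and network, run episodes, and periodically synchronise
    # with a shared (global) model.
    print("worker %d training on %s" % (worker_id, env_name))

def run_with_threads():
    # Threads share memory, so a global Keras/TensorFlow model can be
    # updated directly, but the GIL limits pure-Python parallelism.
    workers = [threading.Thread(target=train_worker, args=(i, ENV_NAME))
               for i in range(NUM_WORKERS)]
    for w in workers:
        w.start()
    for w in workers:
        w.join()

def run_with_processes():
    # Processes bypass the GIL but do not share memory, so weights have to
    # be exchanged explicitly (e.g. via queues or shared arrays).
    workers = [multiprocessing.Process(target=train_worker, args=(i, ENV_NAME))
               for i in range(NUM_WORKERS)]
    for w in workers:
        w.start()
    for w in workers:
        w.join()

if __name__ == "__main__":
    run_with_threads()
    run_with_processes()
```

The trade-off this sketch hints at is the one the comparison in this repository is about: threads can share a single model in memory but contend for the GIL, while processes run truly in parallel but must communicate weights explicitly.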
References
OpenAI Gym Documentation: https://gym.openai.com/docs
Demystifying Deep Reinforcement Learning: https://www.nervanasys.com/demystifying-deep-reinforcement-learning/
Asynchronous methods for deep reinforcement learning. Mnih, Volodymyr; Badia, Adria Puigdomenech; Mirza, Mehdi; Graves, Alex; Lillicrap, Timothy P.; Harley, Tim; Silver, David; Kavukcuoglu, Koray. arXiv preprint arXiv:1602.01783 (2016)
Human-level control through deep reinforcement learning. Mnih, Volodymyr; Kavukcuoglu, Koray; Silver, David; Rusu, Andrei A.; Veness, Joel; Bellemare, Marc G.; Graves, Alex; Riedmiller, Martin; Fidjeland, Andreas K.; Ostrovski, Georg; and others. Nature, vol. 518 (2015)
Asynchronous RL in Tensorflow + Keras + OpenAI's Gym: https://github.com/coreylynch/async-rl
Deep Reinforcement Learning: https://kaixhin.github.io/deep-reinforcement-learning/#/
Q-learning for Keras: https://github.com/farizrahman4u/qlearning4k