
Is Q-learning Provably Efficient?


Abstract

Model-free reinforcement learning (RL) algorithms, such as Q-learning, directly parameterize and update value functions or policies without explicitly modeling the environment. They are typically simpler, more flexible to use, and thus more prevalent in modern deep RL than model-based approaches. However, empirical work has suggested that model-free algorithms may require more samples to learn [7, 22]. The theoretical question of "whether model-free algorithms can be made sample efficient" is one of the most fundamental questions in RL, and remains unsolved even in the basic scenario with finitely many states and actions. We prove that, in an episodic MDP setting, Q-learning with UCB exploration achieves regret $\tilde{O}(\sqrt{H^3 S A T})$, where $S$ and $A$ are the numbers of states and actions, $H$ is the number of steps per episode, and $T$ is the total number of steps. This sample efficiency matches the optimal regret that can be achieved by any model-based approach, up to a single $\sqrt{H}$ factor. To the best of our knowledge, this is the first analysis in the model-free setting that establishes $\sqrt{T}$ regret without requiring access to a "simulator."
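The algorithm analyzed here is tabular Q-learning augmented with an optimism bonus added to each update target. Below is a minimal sketch of a UCB-Hoeffding-style variant, assuming a hypothetical episodic environment `env` exposing `reset()` and `step(state, action, h)`; the bonus constant `c` and failure probability `p` are illustrative parameters, not values taken from the paper.

```python
import numpy as np

def q_learning_ucb(env, S, A, H, K, p=0.05, c=1.0):
    """Sketch of episodic Q-learning with a Hoeffding-style UCB bonus.

    S, A: numbers of states and actions; H: steps per episode; K: episodes.
    `env` is a hypothetical environment with reset() -> state and
    step(state, action, h) -> (reward, next_state).
    """
    T = K * H
    iota = np.log(S * A * T / p)          # log factor used inside the bonus
    Q = np.full((H + 1, S, A), float(H))  # optimistic initialization Q_h(s, a) = H
    Q[H] = 0.0                            # no value beyond the horizon
    V = np.zeros((H + 1, S))
    V[:H] = H                             # V_h(s) = min(H, max_a Q_h(s, a)) = H initially
    N = np.zeros((H, S, A), dtype=int)    # visit counts

    for _ in range(K):
        s = env.reset()
        for h in range(H):
            a = int(np.argmax(Q[h, s]))             # act greedily w.r.t. optimistic Q
            r, s_next = env.step(s, a, h)
            N[h, s, a] += 1
            t = N[h, s, a]
            alpha = (H + 1) / (H + t)               # learning rate alpha_t = (H+1)/(H+t)
            bonus = c * np.sqrt(H ** 3 * iota / t)  # UCB exploration bonus
            Q[h, s, a] = (1 - alpha) * Q[h, s, a] + alpha * (r + V[h + 1, s_next] + bonus)
            V[h, s] = min(H, Q[h, s].max())         # clipped optimistic value
            s = s_next
    return Q, V
```

The key differences from vanilla Q-learning are the optimistic initialization, the decaying learning rate of order H/t, and the count-based bonus that shrinks as a state-action pair is visited more often.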

