Worst-Case Regret Bounds for Exploration via Randomized Value Functions

2020-02-20

Abstract

This paper studies a recent proposal to use randomized value functions to drive exploration in reinforcement learning. These randomized value functions are generated by injecting random noise into the training data, making the approach compatible with many popular methods for estimating parameterized value functions. By providing a worst-case regret bound for tabular finite-horizon Markov decision processes, we show that planning with respect to these randomized value functions can induce provably efficient exploration.
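As a rough illustration of the idea summarized in the abstract, the sketch below shows one way a randomized value function can be generated in a tabular finite-horizon MDP: each regression target used to fit the stage-wise Q-values is perturbed with Gaussian noise, and the agent then plans greedily against the sampled values. The function names, the noise scale `sigma`, the ridge parameter `lam`, and the data layout are illustrative assumptions, not the paper's exact algorithm or notation.

```python
import numpy as np

def randomized_value_iteration(data, H, nS, nA, sigma=1.0, lam=1.0, rng=None):
    """Fit a randomized value function for a tabular finite-horizon MDP.

    data[h] is a list of observed transitions (s, a, r, s_next) at stage h.
    Gaussian noise of scale `sigma` is injected into every regression target,
    so the fitted Q-values are a random sample rather than a point estimate
    (a sketch of the noise-injection idea, not the paper's exact algorithm).
    """
    rng = rng or np.random.default_rng()
    Q = np.zeros((H + 1, nS, nA))            # Q[H] stays zero (terminal stage)
    for h in reversed(range(H)):
        counts = np.full((nS, nA), lam)       # ridge-style regularization toward 0
        targets = np.zeros((nS, nA))
        for (s, a, r, s_next) in data[h]:
            # Perturbed target: reward + injected noise + next-stage value.
            y = r + rng.normal(0.0, sigma) + Q[h + 1, s_next].max()
            counts[s, a] += 1.0
            targets[s, a] += y
        Q[h] = targets / counts                # regularized least-squares fit
    return Q

def greedy_action(Q, h, s):
    """Plan greedily with respect to the sampled value function."""
    return int(np.argmax(Q[h, s]))
```

In use, the agent would refit `Q` from all data gathered so far at the start of each episode and then act greedily with `greedy_action`; because each refit draws fresh noise, the induced policies vary across episodes, and this randomness is what drives exploration.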

