
Generalization and Exploration via Randomized Value Functions

2020-03-05

Abstract

We propose randomized least-squares value iteration (RLSVI) – a new reinforcement learning algorithm designed to explore and generalize efficiently via linearly parameterized value functions. We explain why versions of least-squares value iteration that use Boltzmann or ε-greedy exploration can be highly inefficient, and we present computational results that demonstrate dramatic efficiency gains enjoyed by RLSVI. Further, we establish an upper bound on the expected regret of RLSVI that demonstrates near-optimality in a tabula rasa learning context. More broadly, our results suggest that randomized value functions offer a promising approach to tackling a critical challenge in reinforcement learning: synthesizing efficient exploration and effective generalization.
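The key idea in the abstract is to replace the point estimate of least-squares value iteration with a sample from a Gaussian posterior over value-function weights, so that exploration comes from randomizing the value function itself rather than from per-step dithering. The sketch below is a rough illustration of that idea for a finite-horizon episodic MDP, not the authors' code: the `features` map, the per-period transition `buffer`, and the noise/prior parameters `sigma` and `lam` are all assumptions introduced for this example.

```python
import numpy as np

def rlsvi_sample_weights(phi, targets, sigma=1.0, lam=1.0, rng=None):
    """Fit regularized least squares of targets on features, then sample
    weights from the resulting Gaussian posterior instead of returning the
    point estimate -- the randomization that drives exploration."""
    rng = rng or np.random.default_rng()
    d = phi.shape[1]
    precision = phi.T @ phi / sigma**2 + lam * np.eye(d)  # posterior precision
    cov = np.linalg.inv(precision)                        # posterior covariance
    mean = cov @ phi.T @ targets / sigma**2               # ridge / posterior mean
    return rng.multivariate_normal(mean, cov)

def rlsvi_episode_plan(buffer, features, horizon, n_actions,
                       sigma=1.0, lam=1.0, rng=None):
    """One backward value-iteration sweep, sampling a weight vector per
    period h. buffer[h] holds (s, a, r, s_next) transitions observed so far;
    features(s, a) returns a fixed-length feature vector (assumed here to
    accept integer-indexed states and actions)."""
    rng = rng or np.random.default_rng()
    d = features(0, 0).shape[0]
    theta = [np.zeros(d) for _ in range(horizon + 1)]  # theta[horizon] = 0
    for h in reversed(range(horizon)):
        if not buffer[h]:
            continue  # no data yet for this period; keep the zero prior
        phi = np.stack([features(s, a) for s, a, _, _ in buffer[h]])
        targets = np.array([
            r + max(features(s2, a2) @ theta[h + 1] for a2 in range(n_actions))
            for _, _, r, s2 in buffer[h]
        ])
        theta[h] = rlsvi_sample_weights(phi, targets, sigma, lam, rng)
    return theta  # act greedily: a_h = argmax_a features(s, a) @ theta[h]
```

Because a fresh weight sample is drawn once per episode and the agent then acts greedily with respect to it, exploration is temporally consistent across the whole episode, in contrast to the independent per-step randomness of Boltzmann or ε-greedy exploration that the abstract identifies as inefficient.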
