
Problem Dependent Reinforcement Learning Bounds Which Can Identify Bandit Structure in MDPs

2020-03-11

Abstract

In order to make good decisions under uncertainty, an agent must learn from observations. Two of the most common frameworks for this are Contextual Bandits and Markov Decision Processes (MDPs). In this paper, we study whether there exist algorithms for the more general framework (MDPs) that automatically provide the best performance bounds for the specific problem at hand, without user intervention and without modifying the algorithm. In particular, we find that a very minor variant of a recently proposed reinforcement learning algorithm for MDPs already matches the best possible regret bound $\tilde{O}(\sqrt{SAT})$ in the dominant term when deployed on a tabular Contextual Bandit problem, despite the agent being agnostic to that setting.
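To make the correspondence concrete, below is a minimal sketch (an illustration under my own assumptions, not code from the paper): a tabular Contextual Bandit with S contexts and A actions can be viewed as the horizon-1 special case of a tabular MDP, and a standard optimism-based (UCB-style) rule run per context already attains regret on the order of $\sqrt{SAT}$ up to logarithmic factors, the dominant term quoted above.

```python
import numpy as np

# Hypothetical illustration: a per-context UCB rule on a tabular
# contextual bandit (S contexts, A actions), whose regret scales as
# O~(sqrt(SAT)) -- the dominant term discussed in the abstract.

rng = np.random.default_rng(0)
S, A, T = 5, 3, 20_000
true_means = rng.uniform(size=(S, A))      # unknown mean rewards

counts = np.zeros((S, A))                  # pull counts n(s, a)
sums = np.zeros((S, A))                    # cumulative observed reward
regret = 0.0

for t in range(1, T + 1):
    s = int(rng.integers(S))               # context drawn i.i.d.
    # Optimistic index: empirical mean plus a confidence bonus;
    # unexplored arms get an infinite index and are tried first.
    pulled = counts[s] > 0
    means = np.where(pulled, sums[s] / np.maximum(counts[s], 1), np.inf)
    bonus = np.sqrt(2.0 * np.log(t) / np.maximum(counts[s], 1))
    a = int(np.argmax(means + bonus))
    r = rng.binomial(1, true_means[s, a])  # Bernoulli reward
    counts[s, a] += 1
    sums[s, a] += r
    regret += true_means[s].max() - true_means[s, a]

print(f"cumulative regret after T={T}: {regret:.1f} "
      f"(compare sqrt(SAT) = {np.sqrt(S * A * T):.1f})")
```

The paper's claim is stronger than this sketch: a minor variant of a general MDP algorithm recovers the same rate without being told it faces a bandit; the per-context UCB above only serves to illustrate the target bound.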
