
A POMDP Extension with Belief-dependent Rewards

2020-01-06

Abstract

Partially Observable Markov Decision Processes (POMDPs) model sequential decision-making problems under uncertainty and partial observability. Unfortunately, some problems cannot be modeled with state-dependent reward functions, e.g., problems whose objective explicitly implies reducing the uncertainty on the state. To that end, we introduce ρPOMDPs, an extension of POMDPs where the reward function ρ depends on the belief state. We show that, under the common assumption that ρ is convex, the value function is also convex, which makes it possible to (1) approximate ρ arbitrarily well with a piecewise linear and convex (PWLC) function, and (2) use state-of-the-art exact or approximate solving algorithms with limited changes.
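The sketch below is not the authors' code; it only illustrates the PWLC idea mentioned in the abstract under the assumption that the belief-dependent reward ρ is the negative Shannon entropy (a standard convex uncertainty-reduction reward). Each tangent hyperplane of ρ at a sampled belief acts as one linear piece (an "alpha-vector"), and the maximum over these pieces gives a PWLC lower bound that tightens as more belief points are added.

```python
import numpy as np

def rho(b):
    """Convex belief-dependent reward: negative Shannon entropy of the belief (illustrative choice)."""
    b = np.clip(b, 1e-12, 1.0)
    return float(np.sum(b * np.log(b)))

def tangent_alpha(b):
    """One linear piece of the PWLC approximation: the tangent hyperplane of rho at belief b."""
    b = np.clip(b, 1e-12, 1.0)
    grad = np.log(b) + 1.0                 # gradient of sum_i b_i log b_i
    offset = rho(b) - float(grad @ b)      # makes the hyperplane touch rho at b
    return grad, offset

def pwlc_reward(b, alphas):
    """PWLC lower bound of rho: maximum over the collected tangent hyperplanes."""
    return max(float(g @ b) + c for g, c in alphas)

# Build the approximation from a few sampled beliefs on the 2-state simplex.
belief_points = [np.array([p, 1.0 - p]) for p in (0.1, 0.3, 0.5, 0.7, 0.9)]
alphas = [tangent_alpha(b) for b in belief_points]

b = np.array([0.25, 0.75])
print(rho(b), pwlc_reward(b, alphas))  # PWLC value <= rho(b); the gap shrinks with more belief points
```

Because ρ is convex, every tangent hyperplane lies below it, so the pointwise maximum is a valid lower bound and can be made arbitrarily tight by sampling more beliefs; this is what allows standard alpha-vector-based POMDP solvers to be reused with limited changes.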

