
Robust Online Optimization of Reward-Uncertain MDPs

2019-11-12

Abstract: Imprecise-reward Markov decision processes (IRMDPs) are MDPs in which the reward function is only partially specified (e.g., by some elicitation process). Recent work using minimax regret to solve IRMDPs has shown, despite their theoretical intractability, how the set of policies that are nondominated w.r.t. reward uncertainty can be exploited to accelerate regret computation. However, the number of nondominated policies is generally so large as to undermine this leverage. In this paper, we show how the quality of the approximation can be improved online by pruning/adding nondominated policies during reward elicitation, while maintaining computational tractability. Drawing insights from the POMDP literature, we also develop a new anytime algorithm for constructing the set of nondominated policies with provable (anytime) error bounds. These bounds can be exploited to great effect in our online approximation scheme.
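To make the minimax-regret idea concrete, the following is a minimal sketch, not the paper's algorithm: it assumes each candidate policy is summarized by a state-action occupancy-frequency vector (so its value is linear in the reward vector) and that reward uncertainty is represented by an explicit set of vertex reward vectors, over which worst-case regret is computed by enumeration. The names `occupancies` and `reward_vertices` are illustrative assumptions.

```python
# Sketch of minimax-regret policy selection over a finite candidate set,
# under the assumption that policy value is linear in the reward vector.
import numpy as np

def max_regret(f, occupancies, reward_vertices):
    """Worst-case regret of policy f over the given reward vertices."""
    # Value of every candidate policy under every reward vertex.
    values = occupancies @ reward_vertices.T
    # Regret under each reward = best achievable value minus f's value.
    return float(np.max(values.max(axis=0) - f @ reward_vertices.T))

def minimax_regret_policy(occupancies, reward_vertices):
    """Pick the candidate policy with the smallest worst-case regret."""
    regrets = [max_regret(f, occupancies, reward_vertices) for f in occupancies]
    i = int(np.argmin(regrets))
    return i, regrets[i]

# Toy usage: 3 candidate policies, reward uncertainty given by 4 vertices.
occupancies = np.array([[1.0, 0.0, 2.0],
                        [0.5, 1.5, 1.0],
                        [0.0, 2.0, 0.5]])
reward_vertices = np.array([[1.0, 0.2, 0.1],
                            [0.1, 1.0, 0.3],
                            [0.4, 0.4, 1.0],
                            [0.8, 0.8, 0.2]])
idx, mr = minimax_regret_policy(occupancies, reward_vertices)
print(f"minimax-regret policy: {idx}, regret {mr:.3f}")
```

In the setting the abstract describes, the candidate set would be an (approximate) set of nondominated policies maintained online, and regret would be computed by optimization over the reward polytope rather than by explicit vertex enumeration as in this toy example.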

