Efficient Planning for Factored Infinite-Horizon DEC-POMDPs. Joni Pajarinen and Jaakko Peltonen

2019-11-12
Abstract: Decentralized partially observable Markov decision processes (DEC-POMDPs) are used to plan policies for multiple agents that must maximize a joint reward function but do not communicate with each other. The agents act under uncertainty about each other and the environment. This planning task arises in the optimization of wireless networks, and in other scenarios where communication between agents is restricted by cost or physical limits. DEC-POMDPs are a promising solution, but optimizing policies quickly becomes computationally intractable as problem size grows. Factored DEC-POMDPs allow large problems to be described in compact form, but have the same worst-case complexity as non-factored DEC-POMDPs. We propose an efficient optimization algorithm for large factored infinite-horizon DEC-POMDPs. We formulate expectation-maximization based optimization in a new form, where complexity can be kept tractable by factored approximations. Our method performs well, and it can solve problems with more agents and larger state spaces than state-of-the-art DEC-POMDP methods. We give results for factored infinite-horizon DEC-POMDP problems with up to 10 agents.
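To make the infinite-horizon objective concrete, the sketch below evaluates a joint policy in a tiny hypothetical two-agent problem. This is not the paper's algorithm or its benchmark domains: the states, transition model, reward, and one-node stochastic controllers are all invented for illustration. It shows only the core quantity being optimized, the expected discounted joint reward of decentralized policies, computed by iterating the Bellman recursion.

```python
import itertools

# Hypothetical toy problem (not from the paper): 2 states, 2 agents,
# each with 2 actions and a single-node (memoryless) stochastic
# controller. We evaluate the joint controller's expected discounted
# reward, the criterion infinite-horizon DEC-POMDP planners maximize.

S = [0, 1]          # states
A = [0, 1]          # per-agent actions
gamma = 0.9         # discount factor

def T(s, a1, a2, s2):
    """Transition probability P(s2 | s, a1, a2): matching actions
    make the state more likely to persist."""
    stay = 0.9 if a1 == a2 else 0.5
    return stay if s2 == s else 1.0 - stay

def R(s, a1, a2):
    """Joint reward: agents are rewarded for matching actions in state 0."""
    return 1.0 if (s == 0 and a1 == a2) else 0.0

# Stochastic one-node controllers: pi[i][a] = P(agent i takes action a).
# Crucially, each agent acts without observing the other agent's action.
pi = [{0: 0.8, 1: 0.2}, {0: 0.8, 1: 0.2}]

def evaluate(pi, tol=1e-10):
    """Iterative policy evaluation of the joint controller."""
    V = {s: 0.0 for s in S}
    while True:
        V_new = {}
        for s in S:
            v = 0.0
            for a1, a2 in itertools.product(A, A):
                p = pi[0][a1] * pi[1][a2]   # independent action choices
                v += p * (R(s, a1, a2)
                          + gamma * sum(T(s, a1, a2, s2) * V[s2] for s2 in S))
            V_new[s] = v
        if max(abs(V_new[s] - V[s]) for s in S) < tol:
            return V_new
        V = V_new

V = evaluate(pi)
print(V[0], V[1])
```

Planning then means searching over the controller parameters `pi` to maximize this value; the paper's contribution is doing that search efficiently via a factored EM formulation, whereas this sketch only evaluates one fixed joint policy.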
