
No-Regret Learning in Extensive-Form Games with Imperfect Recall

2020-02-28

Abstract

Counterfactual Regret Minimization (CFR) is an efficient no-regret learning algorithm for decision problems modeled as extensive games. CFR’s regret bounds depend on the requirement of perfect recall: players always remember information that was revealed to them and the order in which it was revealed. In games without perfect recall, however, CFR’s guarantees do not apply. In this paper, we present the first regret bound for CFR when applied to a general class of games with imperfect recall. We also show that CFR applied to any abstraction belonging to our class results in a regret bound not just for the abstract game, but for the full game as well. We verify our theory and show how imperfect recall can be used to trade a small increase in regret for a significant reduction in memory in three domains: die-roll poker, phantom tic-tac-toe, and Bluff.
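For context, CFR obtains its no-regret guarantee by running regret matching independently at each information set: actions are played in proportion to their positive cumulative counterfactual regret. The sketch below is a minimal illustration of that update using NumPy; the single-information-set setup and the numeric values are assumptions for exposition, not code from the paper.

```python
import numpy as np

def regret_matching(cumulative_regret):
    """Regret matching: play actions in proportion to positive cumulative regret."""
    positive = np.maximum(cumulative_regret, 0.0)
    total = positive.sum()
    if total > 0:
        return positive / total
    # If no action has positive regret, fall back to the uniform strategy.
    return np.ones_like(cumulative_regret) / len(cumulative_regret)

# One CFR-style update at a single information set (illustrative values):
cum_regret = np.array([0.0, 0.0, 0.0])
action_values = np.array([1.0, -0.5, 0.2])   # hypothetical counterfactual action values
strategy = regret_matching(cum_regret)
node_value = strategy @ action_values        # value of the current strategy
cum_regret += action_values - node_value     # accumulate instantaneous regret
print(regret_matching(cum_regret))           # updated strategy favors high-regret actions
```

Iterating this update at every information set and averaging strategies over iterations is what yields CFR's regret bound under perfect recall; the paper's contribution is extending such bounds to a class of imperfect-recall games and their abstractions.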

