Abstract
We consider a team of reinforcement learning agents that concurrently learn to operate in a common environment. We identify three properties that are necessary for efficient coordinated exploration: adaptivity, commitment, and diversity. We demonstrate that straightforward extensions of single-agent optimistic and posterior sampling approaches fail to satisfy them. As an alternative, we propose seed sampling, which extends posterior sampling in a manner that meets these requirements. Simulation results examine how per-agent regret decreases as the number of agents grows and establish substantial advantages of seed sampling over alternative exploration schemes.
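To make the idea concrete, the following is a minimal illustrative sketch of the seed-sampling concept in a shared Bernoulli bandit with Beta posteriors: each agent draws an independent seed once (diversity), keeps it throughout (commitment), and maps it, together with the shared data, to a posterior sample (adaptivity). The bandit setting, variable names, and all numerical choices are assumptions made for illustration; this is not the paper's algorithm or implementation.

```python
# Illustrative sketch of seed sampling in a shared Bernoulli bandit.
# Assumptions (not from the paper): Beta(1,1) priors, a common data
# buffer shared by all agents, and arbitrary problem sizes.
import numpy as np

n_arms, n_agents, n_steps = 5, 10, 200
rng = np.random.default_rng(0)
true_means = rng.uniform(size=n_arms)

# Shared success/failure counts, updated by every agent.
successes = np.ones(n_arms)
failures = np.ones(n_arms)

# Each agent draws an independent seed once and keeps it: diversity
# across agents, commitment within an agent.
agent_seeds = rng.integers(0, 2**31, size=n_agents)

for t in range(n_steps):
    for k in range(n_agents):
        # The posterior sample is a deterministic function of the agent's
        # fixed seed and the current shared counts, so it adapts as the
        # shared data accumulates (adaptivity).
        agent_rng = np.random.default_rng(agent_seeds[k])
        sampled_means = agent_rng.beta(successes, failures)
        arm = int(np.argmax(sampled_means))
        reward = rng.random() < true_means[arm]
        # All agents write their observations into the shared counts.
        successes[arm] += reward
        failures[arm] += 1 - reward

print("estimated means:", successes / (successes + failures))
print("true means:     ", true_means)
```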