Abstract
Given a Markov Decision Process (MDP) with $n$ states and $m$ actions per state, we study the number of iterations needed by Policy Iteration (PI) algorithms to converge to the optimal $\gamma$-discounted policy. We consider two variations of PI: Howard's PI that changes the actions in all states with a positive advantage, and Simplex-PI that only changes the action in the state with maximal advantage. We show that Howard's PI terminates after at most $O\left(\frac{nm}{1-\gamma}\log\left(\frac{1}{1-\gamma}\right)\right)$ iterations, improving by a factor $O(\log n)$ a result by [3], while Simplex-PI terminates after at most $O\left(\frac{n^2 m}{1-\gamma}\log\left(\frac{1}{1-\gamma}\right)\right)$ iterations, improving by a factor $O(\log n)$ a result by [11]. Under some structural assumptions on the MDP, we then consider bounds that are independent of the discount factor $\gamma$: given a measure of the maximal transient time $\tau_t$ and the maximal time $\tau_r$ to revisit states in recurrent classes under all policies, we show that Simplex-PI terminates after at most $\tilde O\left(n^3 m^2 \tau_t \tau_r\right)$ iterations. This generalizes a recent result for deterministic MDPs by [8], in which $\tau_t \le 1$ and $\tau_r \le n$. We explain why similar results seem hard to derive for Howard's PI. Finally, under the additional (restrictive) assumption that the state space is partitioned in two sets, respectively states that are transient and recurrent for all policies, we show that Howard's PI and Simplex-PI both terminate after at most $\tilde O\left(m\left(n^2 \tau_t + n \tau_r\right)\right)$ iterations.
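The two PI variants contrasted above differ only in their improvement step: Howard's PI switches the action in every state with a positive advantage, while Simplex-PI switches only in the single state with the largest advantage. A minimal sketch of both update rules, on a toy tabular MDP (the representation `P` of shape `(m, n, n)`, the reward array `r`, and the tolerance are illustrative choices, not taken from the paper):

```python
import numpy as np

def policy_iteration(P, r, gamma, variant="howard"):
    """Exact policy iteration on a finite discounted MDP (illustrative sketch).

    P: (m, n, n) array, P[a, s, s'] = transition probability under action a.
    r: (n, m) array of rewards r(s, a).
    Returns (policy, number of iterations until no positive advantage remains).
    """
    n, m = r.shape
    pi = np.zeros(n, dtype=int)              # arbitrary initial policy
    for it in range(1, 10_000):
        # Policy evaluation: solve (I - gamma * P_pi) v = r_pi exactly.
        P_pi = P[pi, np.arange(n), :]        # (n, n): row s is P[pi[s], s, :]
        r_pi = r[np.arange(n), pi]
        v = np.linalg.solve(np.eye(n) - gamma * P_pi, r_pi)
        # Q-values and advantages of every action over the current policy.
        q = r + gamma * np.einsum("aij,j->ia", P, v)   # (n, m)
        adv = q - v[:, None]
        if np.all(adv.max(axis=1) <= 1e-12):
            return pi, it                    # no positive advantage: optimal
        if variant == "howard":
            # Howard's PI: switch every state that has a positive advantage.
            pi = np.where(adv.max(axis=1) > 1e-12, adv.argmax(axis=1), pi)
        else:
            # Simplex-PI: switch only the state with the maximal advantage.
            s = int(adv.max(axis=1).argmax())
            pi = pi.copy()
            pi[s] = int(adv[s].argmax())
    return pi, it
```

Both variants reach an optimal policy on any finite discounted MDP; Simplex-PI typically needs more (but cheaper) iterations, which is the trade-off the bounds above quantify.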