We consider infinite-horizon stationary $\gamma$-discounted Markov Decision Processes, for which it is known that there exists a stationary optimal policy. Using Value and Policy Iteration with some error $\epsilon$ at each iteration, it is well-known that one can compute stationary policies that are $\frac{2\gamma}{(1-\gamma)^2}\epsilon$-optimal. After arguing that this guarantee is tight, we develop variations of Value and Policy Iteration for computing non-stationary policies that can be up to $\frac{2\gamma}{1-\gamma}\epsilon$-optimal, which constitutes a significant improvement in the usual situation when $\gamma$ is close to 1. Surprisingly, this shows that the problem of "computing near-optimal non-stationary policies" is much simpler than that of "computing near-optimal stationary policies".
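To make the contrast concrete, the following is a minimal Python sketch, not the paper's exact algorithm: approximate Value Iteration on a randomly generated tabular MDP, returning both the classical stationary greedy policy and a periodic non-stationary policy that cycles through the last $m$ greedy policies. All names (`P`, `R`, `nS`, `m`, etc.) are illustrative assumptions, and the cycling order is only one plausible instantiation of the paper's construction.

```python
# Sketch: approximate Value Iteration with a simulated per-iteration
# error of magnitude eps, on a random tabular MDP. Illustrative only.
import numpy as np

rng = np.random.default_rng(0)
nS, nA, gamma, eps, iters, m = 5, 3, 0.95, 0.01, 200, 10

# Random MDP: P[a, s, s'] is a transition kernel, R[s, a] rewards in [0, 1].
P = rng.dirichlet(np.ones(nS), size=(nA, nS))
R = rng.random((nS, nA))

v = np.zeros(nS)
greedy_policies = []
for _ in range(iters):
    # Bellman optimality backup: Q[s, a] = R[s, a] + gamma * sum_s' P[a, s, s'] v[s'].
    Q = R + gamma * np.einsum("ast,t->sa", P, v)
    greedy_policies.append(Q.argmax(axis=1))
    # Inject uniform noise of magnitude eps to model the per-iteration error.
    v = Q.max(axis=1) + eps * rng.uniform(-1.0, 1.0, size=nS)

# Classical output: the last greedy policy, run at every time step.
stationary_policy = greedy_policies[-1]

# Non-stationary alternative: loop over the last m greedy policies,
# newest first (the precise ordering in the paper may differ).
cycle = greedy_policies[-m:][::-1]

def act(state, t):
    """At time t, act according to the (t mod m)-th policy of the cycle."""
    return cycle[t % m][state]
```

Under the abstract's guarantees, the stationary policy's suboptimality can scale with $\frac{1}{(1-\gamma)^2}$, whereas cycling over several recent greedy policies is what buys the improved $\frac{1}{1-\gamma}$ dependence.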