We study the effects of approximate inference on the performance of Thompson sampling in the k-armed bandit problem. Thompson sampling is a successful algorithm for online decision-making but requires posterior inference, which often must be approximated in practice. We show that even a small constant inference error (in α-divergence) can lead to poor performance (linear regret) due to under-exploration (for α < 1) or over-exploration (for α > 0) by the approximation. While for α > 0 this is unavoidable, for α ≤ 0 the regret can be improved by adding a small amount of forced exploration, even when the inference error is a large constant.
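To make the setup concrete, below is a minimal sketch of Thompson sampling on a Bernoulli k-armed bandit with exact Beta posteriors, plus an optional small amount of uniform forced exploration. This is only an illustration, not the paper's construction: the approximate-inference and α-divergence aspects are not modeled here, and the function name, `epsilon` parameter, and exploration scheme are assumptions for the sketch.

```python
import numpy as np

def thompson_sampling(true_means, horizon, epsilon=0.0, seed=0):
    """Thompson sampling on a Bernoulli k-armed bandit (illustrative sketch).

    epsilon > 0 mixes in uniform forced exploration, loosely illustrating
    the remedy mentioned in the abstract (the paper's scheme may differ).
    Returns the cumulative (pseudo-)regret over the horizon.
    """
    rng = np.random.default_rng(seed)
    k = len(true_means)
    a = np.ones(k)  # Beta posterior parameter: successes + 1
    b = np.ones(k)  # Beta posterior parameter: failures + 1
    best = max(true_means)
    regret = 0.0
    for _ in range(horizon):
        if rng.random() < epsilon:
            arm = int(rng.integers(k))       # forced exploration step
        else:
            samples = rng.beta(a, b)         # one sample per arm's posterior
            arm = int(np.argmax(samples))    # play the sampled-best arm
        reward = float(rng.random() < true_means[arm])  # Bernoulli reward
        a[arm] += reward                     # conjugate posterior update
        b[arm] += 1.0 - reward
        regret += best - true_means[arm]
    return regret

# Example: three arms, 10k rounds, 1% forced exploration.
print(thompson_sampling([0.3, 0.5, 0.7], horizon=10_000, epsilon=0.01))
```

With exact inference (as here), epsilon = 0 already suffices; the forced-exploration term matters in the regime the abstract describes, where an under-exploring approximate posterior (α ≤ 0) would otherwise incur linear regret.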