We consider the problem of learning the optimal action-value function in discounted-reward Markov decision processes (MDPs). We prove a new PAC bound on the sample complexity of the model-based value-iteration algorithm in the presence of a generative model, which indicates that for an MDP with $N$ state-action pairs and discount factor $\gamma \in [0,1)$, only $O\!\left(\frac{N}{(1-\gamma)^3 \varepsilon^2} \log \frac{N}{\delta}\right)$ samples are required to find an $\varepsilon$-optimal estimate of the action-value function with probability $1-\delta$. We also prove a matching lower bound of $\Omega\!\left(\frac{N}{(1-\gamma)^3 \varepsilon^2} \log \frac{N}{\delta}\right)$ on the sample complexity of estimating the optimal action-value function by any RL algorithm. To the best of our knowledge, this is the first matching result on the sample complexity of estimating the optimal (action-)value function, in which the upper bound matches the lower bound of RL in terms of $N$, $\varepsilon$, $\delta$, and $1/(1-\gamma)$. Also, both our lower bound and our upper bound significantly improve on the state of the art in terms of the dependence on $1/(1-\gamma)$.