We propose stochastic optimization algorithms that can find local minima faster than existing algorithms for nonconvex optimization problems, by exploiting the third-order smoothness to escape non-degenerate saddle points more efficiently. More specifically, the proposed algorithm only needs $\widetilde{O}(\epsilon^{-10/3})$ stochastic gradient evaluations to converge to an approximate local minimum $\mathbf{x}$, which satisfies $\|\nabla f(\mathbf{x})\|_2 \leq \epsilon$ and $\lambda_{\min}\big(\nabla^2 f(\mathbf{x})\big) \geq -\sqrt{\epsilon}$, in unconstrained stochastic optimization, where $\widetilde{O}(\cdot)$ hides logarithmic polynomial terms and constants. This improves upon the $\widetilde{O}(\epsilon^{-7/2})$ gradient complexity achieved by the state-of-the-art stochastic local minima finding algorithms by a factor of $\widetilde{O}(\epsilon^{-1/6})$. Experiments on two nonconvex optimization problems demonstrate the effectiveness of our algorithm and corroborate our theory.
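To make the approximate local minimum criterion above concrete, the following minimal Python sketch checks the two conditions, a gradient-norm bound of $\epsilon$ and a smallest Hessian eigenvalue of at least $-\sqrt{\epsilon}$, for a toy nonconvex function. It is an illustration of the criterion only, not the proposed algorithm; the function, gradient, Hessian, and tolerance used here are assumptions for the example.

import numpy as np

def is_approx_local_min(grad, hess, x, eps):
    """Check the approximate local minimum conditions:
    ||grad f(x)||_2 <= eps  and  lambda_min(hess f(x)) >= -sqrt(eps)."""
    g = grad(x)
    H = hess(x)
    grad_ok = np.linalg.norm(g) <= eps
    curv_ok = np.min(np.linalg.eigvalsh(H)) >= -np.sqrt(eps)
    return grad_ok and curv_ok

# Toy nonconvex example: f(x) = x0^2 - x1^2 has a non-degenerate saddle at the origin.
grad = lambda x: np.array([2.0 * x[0], -2.0 * x[1]])
hess = lambda x: np.array([[2.0, 0.0], [0.0, -2.0]])

# The origin passes the gradient test but fails the curvature test, so it is rejected.
print(is_approx_local_min(grad, hess, np.zeros(2), eps=1e-3))  # False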