In this paper, we propose a new technique named Stochastic Path-Integrated Differential EstimatoR (SPIDER), which can be used to track many deterministic quantities of interest with significantly reduced computational cost. Combining SPIDER with the method of normalized gradient descent, we propose SPIDER-SFO, which solves non-convex stochastic optimization problems using stochastic gradients only. We provide several error-bound results on its convergence rates. Specifically, we prove that the SPIDER-SFO algorithm achieves a gradient computation cost of $\mathcal{O}\big(\min(n^{1/2}\epsilon^{-2}, \epsilon^{-3})\big)$ to find an $\epsilon$-approximate first-order stationary point. In addition, we prove that SPIDER-SFO nearly matches the algorithmic lower bound for finding a stationary point under the gradient Lipschitz assumption in the finite-sum setting. Our SPIDER technique can be further applied to find an $(\epsilon, \mathcal{O}(\epsilon^{0.5}))$-approximate second-order stationary point at a gradient computation cost of $\tilde{\mathcal{O}}\big(\min(n^{1/2}\epsilon^{-2} + \epsilon^{-2.5}, \epsilon^{-3})\big)$.
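To make the construction concrete, the SPIDER estimator maintains a recursive gradient estimate $V_k = \nabla f_{\mathcal{S}}(x_k) - \nabla f_{\mathcal{S}}(x_{k-1}) + V_{k-1}$, refreshed with a large batch every $q$ iterations, and SPIDER-SFO takes normalized steps along $V_k$. Below is a minimal Python sketch of this scheme; the oracle names `grad` and `sample_batch`, and the particular step-size and batch-size choices, are illustrative assumptions rather than the paper's exact specification.

```python
import numpy as np

def spider_sfo(grad, sample_batch, x0, eta, q, S1, S2, eps, max_iter=10_000):
    """Sketch of SPIDER-SFO: normalized gradient descent driven by the
    path-integrated differential estimator.

    grad(x, batch)   -- average stochastic gradient over a batch of samples
    sample_batch(m)  -- draws m i.i.d. sample indices (hypothetical oracle)
    """
    x_prev, x = None, np.asarray(x0, dtype=float)
    v = None
    for k in range(max_iter):
        if k % q == 0:
            # Refresh the estimator with a large batch (or the full
            # gradient in the finite-sum setting).
            v = grad(x, sample_batch(S1))
        else:
            # Path-integrated update: correct the previous estimate with a
            # small-batch gradient difference along the iterate path.
            batch = sample_batch(S2)
            v = grad(x, batch) - grad(x_prev, batch) + v
        if np.linalg.norm(v) <= 2 * eps:
            return x  # approximate first-order stationary point reached
        # Normalized gradient descent step of length eta.
        x_prev, x = x, x - eta * v / np.linalg.norm(v)
    return x
```

The periodic large-batch refresh bounds the variance accumulated by the small-batch corrections between refreshes, which is what drives the $n^{1/2}\epsilon^{-2}$ term in the gradient computation cost above.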