We give faster algorithms and improved sample complexities for the fundamental problem of estimating the top eigenvector. Given an explicit matrix A, we show how to compute an ε approximate top eigenvector of A^T A in time Õ([nnz(A) + d·sr(A)/gap²]·log(1/ε)). Here nnz(A) is the number of nonzeros in A, sr(A) is the stable rank, and gap is the relative eigengap. We also consider an online setting in which, given a stream of i.i.d. samples from a distribution D with covariance matrix Σ and a vector x0 which is an O(gap) approximate top eigenvector for Σ, we show how to refine x0 to an ε approximation using O(v(D)/(gap·ε)) samples from D. Here v(D) is a natural notion of variance. Combining our algorithm with previous work to initialize x0, we obtain improved sample complexities and runtimes under a variety of assumptions on D.

We achieve our results via a robust analysis of the classic shift-and-invert preconditioning method. This technique lets us reduce eigenvector computation to approximately solving a series of linear systems with fast stochastic gradient methods.

Proceedings of the 33rd International Conference on Machine Learning, New York, NY, USA, 2016. JMLR: W&CP volume 48. Copyright 2016 by the author(s).
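To make the shift-and-invert idea concrete, here is a minimal sketch of the classic method the abstract builds on: with a shift σ slightly above the top eigenvalue λ₁, the matrix (σI − Σ)⁻¹ has a much larger relative eigengap than Σ, so power iteration on it converges rapidly. This is an illustration only, not the paper's algorithm; for simplicity each linear system is solved exactly with a dense solve, whereas the paper's contribution is showing that approximate solves via fast stochastic gradient methods suffice. The function name and the toy matrix are our own choices.

```python
import numpy as np

def shift_invert_power(Sigma, sigma, iters=20, seed=0):
    """Illustrative shift-and-invert power iteration.

    Power iteration is applied to (sigma*I - Sigma)^{-1}, whose top
    eigenvector equals Sigma's but whose eigengap is amplified by the
    shift. Each iteration solves one linear system in (sigma*I - Sigma).
    """
    d = Sigma.shape[0]
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(d)
    x /= np.linalg.norm(x)
    M = sigma * np.eye(d) - Sigma
    for _ in range(iters):
        x = np.linalg.solve(M, x)  # one exact "inversion" step
        x /= np.linalg.norm(x)
    return x

# Toy example: diagonal covariance with known top eigenvector e_1.
Sigma = np.diag([1.0, 0.9, 0.5])
x = shift_invert_power(Sigma, sigma=1.05)
print(abs(x[0]))  # close to 1
```

With σ = 1.05, the preconditioned matrix has eigenvalues 1/0.05 = 20, 1/0.15 ≈ 6.7, and 1/0.55 ≈ 1.8, so the relative gap grows from 0.1 to about 2, which is exactly the acceleration the shift buys.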