In this paper, we obtain improved running times for regression and top eigenvector computation for numerically sparse matrices. Given a data matrix $A \in \mathbb{R}^{n \times d}$ where every row $a \in \mathbb{R}^d$ has $\|a\|_2^2 \leq L$ and numerical sparsity at most $s$, i.e. $\|a\|_1^2 / \|a\|_2^2 \leq s$, we provide faster algorithms for these problems in many parameter settings. For top eigenvector computation, we obtain a running time of $\tilde{O}(nd + r(s + \sqrt{rs})/\mathrm{gap}^2)$, where $\mathrm{gap} > 0$ is the relative gap between the top two eigenvalues of $A^\top A$ and $r$ is the stable rank of $A$. This running time improves upon the previous best unaccelerated running time of $\tilde{O}(nd + rd/\mathrm{gap}^2)$. For regression, we obtain a running time of $\tilde{O}(nd + (nL/\mu)\sqrt{s \, nL/\mu})$, where $\mu > 0$ is the smallest eigenvalue of $A^\top A$. This running time improves upon the previous best unaccelerated running time of $\tilde{O}(nd + nLd/\mu)$. This result expands the regimes where regression can be solved in nearly linear time from when $L = \tilde{O}(\mu)$ to when $L = \tilde{O}(d^{2/3}\mu / (s^{1/3} n^{1/3}))$. Furthermore, we obtain similar improvements even when row norms and numerical sparsities are non-uniform, and we show how to achieve even faster running times by accelerating using approximate proximal point [9] / catalyst [15]. Our running times depend only on the size of the input and natural numerical measures of the matrix, i.e. eigenvalues and $\ell_p$ norms, making progress on a key open problem regarding optimal running times for efficient large-scale learning.
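For concreteness, the nearly linear time regime for regression follows by solving for when the second term of the stated running time is itself $\tilde{O}(nd)$:
\[
\frac{nL}{\mu}\sqrt{\frac{s\,nL}{\mu}} = \tilde{O}(nd)
\;\Longleftrightarrow\;
\left(\frac{L}{\mu}\right)^{3/2}\sqrt{sn} = \tilde{O}(d)
\;\Longleftrightarrow\;
L = \tilde{O}\!\left(\frac{d^{2/3}\mu}{s^{1/3}n^{1/3}}\right),
\]
while the previous bound $\tilde{O}(nd + nLd/\mu)$ is nearly linear exactly when $L = \tilde{O}(\mu)$.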
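As a point of reference, the following minimal numpy sketch (illustrative, not from the paper) computes the numerical measures that parameterize these running times; the random matrix `A` is a hypothetical stand-in for a real data matrix.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((1000, 50))  # hypothetical data: n = 1000 rows, d = 50 columns

row_norms_sq = np.sum(A ** 2, axis=1)            # ||a_i||_2^2 for each row i
L = row_norms_sq.max()                           # uniform row norm bound L

# Numerical sparsity bound: s = max_i ||a_i||_1^2 / ||a_i||_2^2 (always <= d)
s = np.max(np.sum(np.abs(A), axis=1) ** 2 / row_norms_sq)

eigvals = np.linalg.eigvalsh(A.T @ A)            # eigenvalues of A^T A, ascending
mu = eigvals[0]                                  # smallest eigenvalue of A^T A
gap = (eigvals[-1] - eigvals[-2]) / eigvals[-1]  # relative gap of the top two eigenvalues
r = eigvals.sum() / eigvals[-1]                  # stable rank ||A||_F^2 / ||A||_2^2
```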