In this paper we study the well-known greedy coordinate descent (GCD) algorithm for solving $\ell_1$-regularized problems and improve GCD with two popular strategies: Nesterov's acceleration and stochastic optimization. First, based on an $\ell_1$-norm square approximation, we propose a new rule for greedy selection that is nontrivial to solve but convex; we then propose an efficient algorithm called "SOft ThreshOlding PrOjection (SOTOPO)" to exactly solve the $\ell_1$-regularized $\ell_1$-norm square approximation problem induced by the new rule. Based on the new rule and the SOTOPO algorithm, Nesterov's acceleration and stochastic optimization are then successfully applied to the GCD algorithm. The resulting algorithm, called accelerated stochastic greedy coordinate descent (ASGCD), attains the optimal convergence rate $O(\sqrt{1/\epsilon})$; meanwhile, it reduces the iteration complexity of greedy selection by up to a factor of the sample size. Both theoretically and empirically, we show that ASGCD performs better on high-dimensional, dense problems with sparse solutions.
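As background for the soft-thresholding idea the abstract refers to, the following is a minimal sketch of the classical soft-thresholding operator, the proximal operator of $\lambda\|x\|_1$ that underlies coordinate-descent methods for $\ell_1$-regularized problems. This is only the standard primitive, not the paper's SOTOPO algorithm, which additionally handles the projection induced by the new greedy selection rule.

```python
import numpy as np

def soft_threshold(x, lam):
    """Classical soft-thresholding operator:
        S_lam(x)_i = sign(x_i) * max(|x_i| - lam, 0).
    It is the proximal operator of lam * ||x||_1 and a standard building
    block for (greedy) coordinate descent on l1-regularized objectives.
    NOTE: this is NOT the paper's SOTOPO algorithm, only the classical
    primitive that such methods generalize."""
    x = np.asarray(x, dtype=float)
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

# Example: entries with |x_i| <= lam are zeroed, others shrink toward 0,
# which is how l1 regularization produces sparse solutions.
print(soft_threshold([3.0, -0.5, -2.0], 1.0))
```

Applying the operator coordinate-wise is what makes each coordinate update of such methods cheap, which is one reason GCD is attractive for problems with sparse solutions.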