Abstract
We propose a nonconvex estimator for covariate-adjusted precision matrix estimation in the high-dimensional regime under sparsity constraints. To compute this estimator, we propose an alternating gradient descent algorithm with hard thresholding. Compared with existing methods along this line of research, which lack theoretical guarantees on the optimization error and/or the statistical error, the proposed algorithm is not only computationally more efficient, with a linear rate of convergence, but also attains the optimal statistical rate up to a logarithmic factor. Thorough experiments on both synthetic and real data support our theory.
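As a rough illustration of the "gradient descent with hard thresholding" primitive the abstract refers to, the sketch below applies it to a toy sparse least-squares problem rather than to the paper's covariate-adjusted precision matrix estimator; the function names, the least-squares objective, and the step-size choice are all illustrative assumptions, not the authors' method.

```python
import numpy as np

def hard_threshold(v, s):
    # Keep the s largest-magnitude entries of v; zero out the rest.
    out = np.zeros_like(v)
    idx = np.argsort(np.abs(v))[-s:]
    out[idx] = v[idx]
    return out

def iht_least_squares(X, y, s, iters=200):
    # Illustrative iterative hard thresholding for sparse least squares:
    #   beta <- HT_s(beta - eta * X^T (X beta - y))
    # This is a generic stand-in, NOT the paper's alternating estimator.
    n, p = X.shape
    eta = 1.0 / np.linalg.norm(X, 2) ** 2  # step size from the spectral norm
    beta = np.zeros(p)
    for _ in range(iters):
        grad = X.T @ (X @ beta - y)
        beta = hard_threshold(beta - eta * grad, s)
    return beta

# Toy noiseless example: recover a 3-sparse vector.
rng = np.random.default_rng(0)
n, p, s = 100, 20, 3
X = rng.standard_normal((n, p))
beta_true = np.zeros(p)
beta_true[:s] = [2.0, -1.5, 1.0]
y = X @ beta_true
beta_hat = iht_least_squares(X, y, s)
```

In this noiseless, well-conditioned setting the iterates contract geometrically, which is the kind of linear convergence the abstract claims for the (more involved) alternating scheme.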