This is a theoretical study on the sample complexity of dictionary learning with a general type of reconstruction loss. The goal is to estimate an $m \times d$ matrix $D$ of unit-norm columns when the only available information is a set of training samples. Points $x$ in $\mathbb{R}^m$ are subsequently approximated by the linear combination $Da$ after solving the problem $\min_{a \in \mathbb{R}^d} L(x - Da) + g(a)$, with the function $g$ being either an indicator function or a sparsity-promoting regularizer. Considered here is the case where the loss is separable, $L(u) = \sum_{i=1}^{m} h(u_i)$, and $h$ is an even univariate function on the real line. Connections are drawn between $L$ and the Moreau envelope of $h$. A new sample complexity result for the $k$-sparse dictionary problem removes the spurious condition on the coherence of $D$ that appeared in previous works. Finally, comments are made on the approximation error of certain families of losses. The derived generalization bounds are of order $\sqrt{\log n / n}$.
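Since the Moreau envelope is central to the connection drawn above, a minimal numerical sketch may help fix ideas. The snippet below evaluates the envelope $M_{\gamma h}(u) = \min_v \{ h(v) + (u-v)^2 / 2\gamma \}$ by brute-force grid minimization and checks it against a case with a known closed form: the Moreau envelope of the absolute value is the Huber function. The function names, the choice $\gamma = 1$, and the grid-search implementation are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def moreau_envelope(h, u, gamma=1.0, grid=None):
    """Numerically evaluate the Moreau envelope
    M_{gamma h}(u) = min_v { h(v) + (u - v)^2 / (2*gamma) }
    by brute-force minimization over a dense grid of candidate v.
    (Illustrative sketch; not the paper's construction.)"""
    if grid is None:
        grid = np.linspace(-10.0, 10.0, 200001)  # spacing 1e-4
    return float(np.min(h(grid) + (u - grid) ** 2 / (2.0 * gamma)))

def huber(u, gamma=1.0):
    """Closed form: the Moreau envelope of h(v) = |v| is the Huber function."""
    return u**2 / (2.0 * gamma) if abs(u) <= gamma else abs(u) - gamma / 2.0

# The numerical envelope of |.| matches Huber at several test points.
for u in [-3.0, -0.5, 0.0, 0.7, 2.5]:
    assert abs(moreau_envelope(np.abs, u) - huber(u)) < 1e-4
```

The smoothing at work here is the same mechanism that lets an even, possibly nonsmooth $h$ induce a well-behaved separable reconstruction loss.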