We prove that $\widetilde{\Theta}(kd^2/\varepsilon^2)$ samples are necessary and sufficient for learning a mixture of $k$ Gaussians in $\mathbb{R}^d$, up to error $\varepsilon$ in total variation distance. This improves both the known upper bounds and lower bounds for this problem. For mixtures of axis-aligned Gaussians, we show that $\widetilde{O}(kd/\varepsilon^2)$ samples suffice, matching a known lower bound. The upper bound is based on a novel technique for distribution learning that relies on a notion of sample compression. Any class of distributions that admits such a sample compression scheme can also be learned with few samples. Moreover, if a class of distributions has such a compression scheme, then so do the classes of products and mixtures of those distributions. The core of our main result is showing that the class of Gaussians in $\mathbb{R}^d$ admits a small-sized sample compression scheme.
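
For concreteness, the two bounds above can be restated in display form; the notation $n(k,d,\varepsilon)$ for the number of samples needed to learn the mixture to within total variation error $\varepsilon$ is introduced here only for illustration, and the tildes suppress polylogarithmic factors in $k$, $d$, and $1/\varepsilon$:
\[
  n_{\mathrm{general}}(k,d,\varepsilon) \;=\; \widetilde{\Theta}\!\left(\frac{k d^{2}}{\varepsilon^{2}}\right),
  \qquad
  n_{\mathrm{axis\text{-}aligned}}(k,d,\varepsilon) \;=\; \widetilde{O}\!\left(\frac{k d}{\varepsilon^{2}}\right).
\]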