We present a randomized primal-dual algorithm that solves the problem $\min_{x \in \mathcal{X}} \max_{y \in \mathcal{Y}} y^\top A x$ to additive error $\epsilon$ in time $\widetilde{O}\!\left(\mathrm{nnz}(A) + \sqrt{\mathrm{nnz}(A)\, n}/\epsilon\right)$, for matrix $A$ with larger dimension $n$ and $\mathrm{nnz}(A)$ nonzero entries. This improves the best known exact gradient methods by a factor of $\sqrt{\mathrm{nnz}(A)/n}$ and is faster than fully stochastic gradient methods in the accurate and/or sparse regime $\epsilon \le \sqrt{n/\mathrm{nnz}(A)}$. Our results hold for $x, y$ in the simplex (matrix games, linear programming) and for $x$ in an $\ell_2$ ball and $y$ in the simplex (perceptron / SVM, minimum enclosing ball). Our algorithm combines Nemirovski's "conceptual prox-method" and a novel reduced-variance gradient estimator based on "sampling from the difference" between the current iterate and a reference point.
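To make the last sentence concrete, the following is a minimal NumPy sketch of the "sampling from the difference" idea for one gradient block, $A^\top y$: the exact product $A^\top y_0$ is computed once at the reference point, and the correction $A^\top (y - y_0)$ is estimated from a single row of $A$ sampled in proportion to $|y_i - y_{0,i}|$. The function name `sampled_difference_gradient` and its interface are illustrative assumptions, not the paper's implementation; the full estimator, its $\ell_2$-ball variant, and the combination with the prox-method are omitted.

```python
import numpy as np

def sampled_difference_gradient(A, y, y0, grad_at_ref, rng):
    """Unbiased estimate of A^T y (illustrative sketch).

    grad_at_ref is the exact product A^T y0, computed once at the
    reference point y0. The remaining term A^T (y - y0) is estimated
    from a single row of A, sampled with probability proportional to
    |y_i - y0_i|, so the estimator's variance shrinks as y approaches y0.
    """
    delta = y - y0
    l1 = np.abs(delta).sum()
    if l1 == 0.0:
        return grad_at_ref.copy()          # iterate equals the reference point
    p = np.abs(delta) / l1                 # sampling distribution over rows
    i = rng.choice(len(delta), p=p)
    # Unbiased: E[A[i, :] * delta[i] / p[i]] = A^T (y - y0)
    return grad_at_ref + A[i, :] * (delta[i] / p[i])


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = rng.standard_normal((100, 50))
    y0 = np.full(100, 1.0 / 100)           # reference point on the simplex
    y = y0.copy(); y[:10] += 0.01; y /= y.sum()
    grad_at_ref = A.T @ y0                 # one exact matrix-vector product
    samples = [sampled_difference_gradient(A, y, y0, grad_at_ref, rng)
               for _ in range(2000)]
    print(np.linalg.norm(np.mean(samples, axis=0) - A.T @ y))  # small
```

Each call touches only one row of $A$ after the single exact product at the reference point, which is the source of the $\sqrt{\mathrm{nnz}(A)\, n}/\epsilon$ term in the running time above.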