Abstract
We propose computationally efficient algorithms for online linear optimization with bandit feedback, in which a player chooses an action vector from a given (possibly infinite) set A ⊆ ℝ^d and then suffers a loss that is a linear function of the chosen action vector. Although existing algorithms achieve an optimal regret bound of Õ(√T) for T rounds (ignoring factors of poly(d, log T)), computationally efficient ways of implementing them have not yet been specified, in particular when |A| is not bounded by a polynomial in d. A standard way to pursue computational efficiency is to assume access to an efficient algorithm, referred to as an oracle, that solves (offline) linear optimization problems over A. Under this assumption, the computational efficiency of a bandit algorithm can be measured in terms of its oracle complexity, i.e., the number of oracle calls. Our contribution is to propose algorithms that offer optimal regret bounds of Õ(√T) as well as low oracle complexity, in both the non-stochastic and the stochastic setting. Our algorithm for the non-stochastic setting has an oracle complexity of Õ(T) and is the first algorithm that achieves both a regret bound of Õ(√T) and an oracle complexity of O(poly(T)), given only linear optimization oracles. Our algorithm for the stochastic setting calls the oracle only O(poly(d, log T)) times, which is smaller than the current best oracle complexity of O(T) when T is sufficiently large.
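To make the oracle model concrete, the following is a minimal sketch (not from the paper; all names are hypothetical) of a linear optimization oracle over a finite action set A, together with a wrapper that counts oracle calls, i.e., the quantity measured by oracle complexity.

```python
# Hypothetical illustration of the oracle model: the oracle solves the
# offline problem min_{a in A} <loss, a>, and oracle complexity is the
# number of times a bandit algorithm invokes it.

def linear_opt_oracle(A, loss):
    """Return an action a in A minimizing the inner product <loss, a>."""
    return min(A, key=lambda a: sum(l * x for l, x in zip(loss, a)))

class CountingOracle:
    """Wraps the oracle so that the number of calls can be measured."""

    def __init__(self, A):
        self.A = A
        self.calls = 0  # oracle complexity accumulated so far

    def __call__(self, loss):
        self.calls += 1
        return linear_opt_oracle(self.A, loss)
```

For example, with A = {(0, 1), (1, 0), (1, 1)} and loss vector (1.0, 0.5), the oracle returns (0, 1), and the wrapper records one call. A bandit algorithm in this model would only interact with A through such calls, so its total `calls` count after T rounds is exactly its oracle complexity.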