Abstract
In this paper, we study the multi-objective bandits
(MOB) problem, where a learner repeatedly selects
one arm to play and then receives a reward vector
consisting of multiple objectives. MOB has found
real-world applications as varied as online recommendation and network routing. However, these applications typically contain contextual information that can guide the learning process, which is ignored by most existing work. To utilize this information, we associate each arm with a context vector and assume the reward follows a generalized linear model (GLM).
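As a rough sketch of this assumption (the link function, the per-arm contexts, the per-objective parameters, and the number of objectives m are illustrative notation, not fixed by the abstract), the mean of each coordinate of the reward vector could be written as:

```latex
% Sketch of the GLM reward assumption; all symbols are assumed notation:
% \mu is a link function, x_a the context of arm a, \theta_i^* the unknown
% parameter of objective i, and m the number of objectives.
\mathbb{E}\bigl[\, y_t^{(i)} \mid a_t = a \,\bigr]
  \;=\; \mu\bigl( x_a^{\top} \theta_i^{*} \bigr),
  \qquad i = 1, \dots, m .
```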
We adopt the notion of Pareto regret to evaluate the learner's performance and develop a novel algorithm for minimizing it. The essential idea is to apply a variant of the online Newton step to estimate the model parameters; based on these estimates, we use the upper confidence bound (UCB) policy to construct an approximation of the Pareto front, and then choose one arm uniformly at random from the approximate front.
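As a rough illustration of this selection rule (not the paper's algorithm itself: the UCB scores below are random placeholders, whereas the paper computes them from online-Newton-step estimates of the GLM parameters), the Pareto-front filtering and uniform sampling might look like:

```python
import numpy as np

def pareto_front(ucb):
    """Return indices of arms whose UCB vectors are not Pareto-dominated.

    ucb: (K, m) array; ucb[k] is the UCB estimate of arm k's m-dimensional
    expected reward. Arm k is dominated if some other arm j is >= in every
    objective and strictly > in at least one.
    """
    K = ucb.shape[0]
    front = []
    for k in range(K):
        dominated = any(
            np.all(ucb[j] >= ucb[k]) and np.any(ucb[j] > ucb[k])
            for j in range(K) if j != k
        )
        if not dominated:
            front.append(k)
    return front

rng = np.random.default_rng(0)
ucb = rng.random((5, 2))       # toy UCB scores: 5 arms, 2 objectives
candidates = pareto_front(ucb) # approximate Pareto front
arm = rng.choice(candidates)   # play one arm uniformly at random
```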
Theoretical analysis shows that the proposed algorithm achieves an $\tilde{O}(d\sqrt{T})$ Pareto regret, where $T$ is the time horizon and $d$ is the dimension of contexts, matching the optimal result for the single-objective contextual bandit problem. Numerical experiments demonstrate the effectiveness of our method.