Abstract
We study the $K$-armed dueling bandit problem, a variation of the classical Multi-Armed Bandit (MAB) problem in which the learner receives only relative feedback about the selected pairs of arms. We propose an efficient algorithm called Relative Exponential-weight algorithm for Exploration and Exploitation (REX3) to handle the adversarial utility-based formulation of this problem. We prove a finite-time expected regret upper bound of order $O(\sqrt{K \ln(K)\, T})$ for this algorithm and a general lower bound of order $\Omega(\sqrt{KT})$ for the problem. Finally, we provide experimental results using real data from information retrieval applications.